While at Texas Instruments, around 1976, I was in the group trying to move the company to a more modern programming language. I had learned Pascal in grad school by studying Wirth's compiler source, so I was happy to be in this department. We had built a commercial, extended Pascal that ran on the company's own hardware, from microprocessors through minicomputers.
The goal was to move the COBOL, Fortran, JOVIAL, and assembly language programmers in the company to Pascal. My department built the compiler and, because I had taught programming in grad school, I ended up being one of the evangelists who taught innumerable one-week crash courses to the company's engineers, business programmers, and software people. It was actually a really fun job and I met a large number of people who worked in the company's various branches.
I discovered that there is no better way to learn every facet of something than to teach it, more than once, to grumpy students sceptical of the newfangled concepts.
A couple of years later I was back in grad school again, still programming in Pascal. It was interesting to realize that most fellow PhD students who go straight through aren't really good programmers and don't understand software engineering. (This was a long time ago so it's probably very different now. CS students own their own computers and are able to program much more than we could.)
I remember Pascal fondly like some friends remember their first car. Like cars though, the new languages are really much better (safer, faster, more reliable, etc).
>This was a long time ago so it’s probably very different now.
As a recent CS undergrad and current software engineer in a research organization, I don't think it is. Most of the people I worked with on group assignments in my undergrad had, at best, a limited understanding of their prior coursework, much less any knowledge acquired on their own time. Furthermore, CS PhDs in my org don't know anything about software engineering and mostly don't care. They are really here to do math. They don't need to write good code because I can transform their mediocre (from an SE perspective) code that handles the base case of a complex algorithm into better code that is maintainable, handles error cases and integrates with the rest of our codebase.
I don't think it's different for CS grads today, and I don't think it's supposed to be. CS at a university is not meant to educate software devs; software dev is just one tool of a CS researcher, and depending on the research field, software dev/languages can themselves be the object of research. A CS degree at uni is meant to teach the student to do CS research, i.e. to stay in academia.
But I think CS grads are very popular for software dev positions because of their heavy theoretical knowledge of the field, which provides the basis for becoming a very good software dev.
It's not different for CS grads today. Half of my class was playing Quake 3 during classes :/.
But one thing my school did perfectly was force 3 months of internships per year and leave at least 1 day per week without classes (2 days for the last 2 years).
There is nothing better for learning than experience. The first 2 years I did unpaid internships; by my third year I knew enough to get paid $20/hour working part time. By my last year of school I was making $35/hour 2 days per week without even having a degree; that's over $2,400 a month while I was still studying 3 days a week. And not only money: I got a lot of hands-on experience.
> His brother John, working at the movie visual effects company Industrial Light & Magic, found it useful for editing photos, but it wasn’t intended to be a product.
John Knoll is a name that may be familiar to people who stick around after the credits; he was ILM's supervisor on Generations and First Contact, all the Star Wars prequels, some Harry Potter movies, as well as some of the Pirates of the Caribbean movies. His name comes up a lot in behind-the-scenes interviews etc. with directors about effects.
The original title "Adobe Photoshop was written in Pascal" has been changed, and it's a pity - what is important in this story is the fact that the first commercial versions of one of the most popular pieces of software were written in a language frowned upon by many programmers today. Moreover, it shows that you can successfully write the app in one language, then rewrite it in something else while maintaining a smooth continuity between versions.
There are many big and well-known software packages written in Delphi. Yet Delphi the language is not well known.
I recently read an article comparing Flutter to native Android layouts, and thought to myself: Delphi has had a cross-platform framework that does all that, and more, in fewer lines of code, for six years.
(*) Disclaimer: I work for the company that makes Delphi. I do that because I think it and its sister C++Builder are great products and want to see it better-known.
There's a big obstacle to Delphi adoption that people below have mentioned: Cost. Even the starter pack has a limit of $1000 in revenue, and Delphi costs $3000 or so. I'm never going to use Delphi if it means that five years later, my app that costs $3 and makes a sale a week will need me to spend $3000 licensing it or open me up to lawsuits.
I like Autodesk's license for Fusion, it's free to use (the full version) until you're making $100,000 in yearly revenues, after which it costs some amount of money. If I were making $100k in revenues (and probably well before that), I would happily pay for the tool that's making me money, but paying 3x my revenue for the IDE/language is hilariously prohibitive. There's also no way in hell I'm going to spend $3000 for a hobby project that may or may not make $100 during its lifetime.
It's too bad, too, because I remember how easy and quick developing in VB4/VB5 was, so Delphi sounds great on paper, and I would definitely use it to develop at least some applications if it weren't so expensive.
>It's too bad, too, because I remember how easy and quick developing in VB4/VB5 was, so Delphi sounds great on paper, and I would definitely use it to develop at least some applications if it weren't so expensive.
I have been having these thoughts for the past few weeks, after watching DHH's RailsConf keynote and thinking about conceptual compression and current technology. Why have we made everything 10x more complicated over the years? It is not just web sites; apps as well. VB may be ugly, but it lowered the entry barrier. (VB.Net is an entirely different thing.) Delphi has had everything done right for well over a decade, and its ideas and philosophy aren't being copied anywhere.
Instead of making software easier for everybody to learn, easier to maintain, less prone to bugs, easier to debug, more performant and less resource-hungry, we have managed to bloat the frameworks, the languages, and the CPU and memory use, everything a trillion times more abstracted. Everything is a hack on top of another hack.
Exactly. The problem with Delphi is that it tries to be too many things, for example we don't need another language ecosystem, but we do need great cross-platform UI toolkits/development workflows. Delphi excels at that, but if the library I want to use isn't available for the language, I'm out of luck.
If we could keep the great dev tooling and UI toolkit, but use a language with a larger ecosystem, that would be much more impactful. Maybe Python + Qt can be the answer to that.
Have you looked at Lazarus[1]? It's open source, has a cross-platform GUI framework (wrapping native widgets) with a VB-/Delphi-like GUI builder and uses the Free Pascal compiler[2] which implements a supposedly fairly Delphi-compatible Object Pascal dialect.
I think the cost of Delphi is more a sign of the fact that not a lot of people adopted it, or at least not a lot of people currently use it. I tried to use Delphi when it first came out as Pascal was my favorite language at the time. But the IDE was incredibly buggy and would crash on a regular basis, so obviously not a lot of people adopted it then and probably never gave it a second chance.
Delphi 1.0 struggled as a 16-bit Windows application as the transition to 32 bit happened. The next version was considerably more stable.
Delphi was awesome through 7.0. I didn't really track it after that, but I was a Delphi programmer for several years from release onwards (happened to come from a college that taught Pascal for the first year of CS), then worked for Borland on Delphi/C++Builder for a couple of years after that.
Borland really mishandled their language stack by alienating all the grassroots. Nothing against the later owners, but Delphi's pricing became absurdly tilted towards legacy enterprise projects.
Lazarus was mentioned earlier. Think they're fully compatible with the 7.0 state of things, which was decent. It's worth checking out for project prototyping, etc.
Lower the prices... or make it more accessible, like JetBrains did before they switched to a subscription model? That would give you instant developer love.
But my budget for hobby projects is $0, and a time-limited trial isn't useful. Therefore I won't try it, and I won't be advocating it to anyone else either.
Even so, it's pretty mediocre that it only builds Windows apps, and putting macOS and Linux support on different tiers $1500 apart is pretty cruel.
I am a Photoshop engineer. Many years ago it was transpiled to C++ that looked very much like Object Pascal. Much of that original legacy survives to this day.
I don't know exactly. It was transpiled before I came on the scene, which was 1996. The transition probably happened around the same time the Macintosh OS transitioned from Pascal to C-based APIs, and/or Photoshop was ported to Windows (around the Photoshop 2.5 timeframe.)
I don't know why Pascal has such a poor name. I don't enjoy using it (only because I prefer braces or whitespace), but it's much higher level than C++ with compilers that emit pretty tight code. That is pretty rare nowadays. There are many reasons to praise it.
I'm not talking about titles in general, you're probably right, but in this particular case I think this was the crucial bit of information for me - the code was released many years ago, and I'm sure only a fraction of HNers read the articles in full so many would never notice this important piece of information.
The way generally HN works is - if you post something, you don't get to decide what the critical bit of information in it is - you aren't allowed to editorialize. So if you want to highlight something, you're supposed to do it in comments or write your own thing that highlights the bits you think are interesting and post that. This makes for some anodyne titles but it's not hard to see why it's probably saner than the alternatives.
Indeed that seems to be the moral of the story for a lot of cases that I know of:
- plain JavaScript/jQuery to React
- Ruby on Rails/Node.js to Golang
Many people might dismiss the former as an inferior language, but the point is, you might not have enough time to write successful software in the latter given the window of opportunity. And the initial productivity boost from a simpler language might be important.
Of course the reverse can be bad as well when you have an unmaintainable 50k lines jQuery app and nobody wants to rewrite that with a proper framework.
Everything was written in Pascal back then, not just Photoshop. I spent years writing Pascal as a professional Mac developer before Lightspeed C (which became THINK C, then Symantec C later on) appeared... Then I never looked back.
MPW (Macintosh Programmer's Workshop) was pretty awesome, if slow as hell -- Turbo Pascal was a lot better to work with -- the only issue is that it 'tokenized' your source code, so it wasn't plain text anymore...
I still miss these 'one-pass' compilers; I think it peaked with the Metrowerks toolchain (which kept a Pascal plugin for a long time!), which was IMO the best compiler/linker/debugger toolset ever made.
They were not forced; it was their choice, and the chosen editor (VS6) does have search/replace. My personal guess is that keeping those BEGIN/END was seen as a smart move.
I guess they just liked the way it looked & felt? Really doesn't seem to make a difference to the development other than it feeling more familiar to Pascal devs.
It doesn't harm the development, indeed. However it does harm the developer's freedom in choosing his tools, for example Vim will try to fold regions delimited by brackets in C files. Nothing unbearable in the end.
Pascal was a brilliant intro language, but Turbo Pascal IDE – at least for me – made it unbearable (yellow on blue...). It was a bliss when I switched to Visual Studio and C.
However, MPW still holds one of my best programming experiences: I discovered it quite late, on an iBook, shortly before Xcode's introduction. In MPW I wrote a simple CLI math calculator, where you type your full equation like in a notebook – implementing an array walk with pseudo-regex. MPW was simple enough and clear enough that it was the first moment I thought I could really handle coding.
Frankly, I really miss the Turbo Debugger. The TUI mode of GDB is nowhere near the clarity of the plain old TD. There were some efforts like cgdb but that's not it either.
I actually am red-green colorblind – in the dot test, I see the dots are red and green, but I don't see the number in the picture. I just hated how TP looks.
No it wasn't. In fact most Mac apps in the '80s after 1984 were written in C. All three of ours from '85-'94 were in C. We were early adopters of LightspeedC and then later CodeWarrior. MPW was a waste of time.
The core underpinnings of the Mac, including QuickDraw, were written in Pascal, with some 68k assembly in critical speed sections.
QuickDraw began life in 1979 written by Bill Atkinson in Pascal and still ships in MacOS High Sierra. Photoshop originally lifted its main design metaphors and toolbar icons from Atkinson's (and Susan Kare's) MacPaint.
So plain C always had very little place at Apple; even at NeXT it was all about Objective-C and Objective-C++, including the driver framework.
Both A/UX and NeXTSTEP used UNIX compatibility as a means to bring software into the platform, but the goodies were in the platform specific frameworks.
I remember these years well, and there were years where C (not C++) was prevalent on the mac dev market.
Metrowerks only appeared quite late, with the PowerPC Macs -- I remember having both a prototype of the PowerPC 601 66MHz pizza box and the fancy new compiler on the block, delivered from Apple (I remember the excitement -- that sort of excitement that new tech delivered, and that I never really found again after that decade!). I stayed on the 'permanent beta list' of Metrowerks until they vanished.
Before that, THINK C/Symantec C/C++ existed and were used extensively, including inside Apple, but most of the software was plain C, or "base C++" (it was before the time of templates and the like) at best...
Apple never used C++ for system interfaces. Even 'Carbon' on OS X still used Pascal strings for the historical Toolbox calls. Over the years they added extra helpers and extra glue code to cater for C, but they stuck more or less to their API design until OS X arrived.
The funny bit is that even afterward they continued with some of their paradigms; the 'component' one, for example, was still in use for years afterward for QuickTime stuff in OS X. Likewise, AppleEvents (also using "components") had the same paradigm and structures as ever (and might still have!).
Some of my knowledge was from reading Computer Shopper and other computer magazines with a Mac section.
It was unthinkable for most businesses in Portugal to get a Mac, given that there was only one official importer, Interlog, with shops in Lisbon and Porto, nowhere else.
So I had the idea the C++ frameworks were introduced at the same time as the C/C++ SDK, not as system interfaces but as wrappers on top of them.
That's an OSX kernel interface, not at all related to what they called 'classic' afterward...
Besides, despite the syntax sugar, it is NOWHERE near C++ at all. No constructors/destructors (if I remember correctly), no stack-based objects, no exceptions, no virtual functions, no library, and you had to provision your classes with the right number of 'dummy' member-function pointers to allow for expansion.
I've written enough OSX drivers back then to know that, quite frankly, C would have been a lot better.
> No constructors/destructors (if I remember correctly), no stack-based objects, no exceptions, no virtual functions, no library, and you had to provision your classes with the right number of 'dummy' member-function pointers to allow for expansion.
I think that you are fairly mistaken. IOKit drivers are written by inheriting from a base class and overloading virtual functions.
Fixed-length string types are appropriate for a lot of low-level systems programming and are a lot safer and more secure than C's null-terminated (potentially unbounded) strings.
Null-terminated strings have caused problems not just for security, but have also added significant performance costs: https://queue.acm.org/detail.cfm?id=2010365
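To make the cost difference concrete, here is a minimal Free Pascal sketch (my own illustration, not from the linked article) contrasting a length-prefixed ShortString with a C-style null-terminated PChar:

    program StringCost;
    {$mode objfpc}
    uses
      strings;  { C-style PChar helpers such as StrLen }

    var
      s: ShortString;  { length-prefixed: byte 0 holds the current length (max 255) }
      p: PChar;        { null-terminated, potentially unbounded }
    begin
      s := 'hello';
      WriteLn(Length(s));  { O(1): just reads the prefix byte }

      p := 'hello';        { string literals are null-terminated, so this is legal }
      WriteLn(StrLen(p));  { O(n): must scan the buffer for the terminating #0 }
    end.

The bounded type also truncates an over-long assignment instead of running off the end of a buffer, which is the safety half of the argument.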
When I was making some Photoshop extensions [0], I slowly formed a general idea for how it works under the hood.
Initially I was really excited to write some code and give Photoshop some superpowers. In particular, I wanted to make a panel for automatic layout, and another for generative art.
But in the end I realised that Photoshop is just not designed to be extensible other than in some very specific ways (filters mostly).
I was also convinced that even Adobe probably has trouble putting in new features. This was surprising, knowing that most of Photoshop's features don't interact with other features in any way (unlike a word processor or a 3D design suite like Maya). They could even develop the features in silos if they wanted to. But still, the extension APIs give some strong hints that the whole thing is tangled and unwieldy [1].
I did try to tame the thing [2] but the whole process was so unstable that after a few months of part time work, I just gave up. My conclusion was that if you want to have a design tool with super powers, you can't build it on top of Photoshop (or any of its competitors for that matter).
Anyway, this is where I shamelessly self-plug and say that I decided to start a company to build such a tool myself. Write to me [3] if you would like to learn more ;)
Good luck! I had some limited success in extending InDesign, but only in the documented way. The debugging process is a PITA for more complex things though.
Very large projects like Photoshop need to be able to be worked on by mostly independent teams to cut down on communication overhead. How things work under the hood can end up very different from the API and that's a very good thing.
I take it you meant independent teams? If so, I did acknowledge that. In fact, the whole interaction model in Photoshop allows it to be developed by independent teams that don't need to communicate much (this is at the expense of some major UX opportunities imho).
> How things work under the hood can end up very different from the API
Photoshop is a MacApp framework project. Apple Inc. designed and built MacApp and promoted it to developers with stage time at the developers' conference, and by never mentioning or acknowledging other frameworks.
Meanwhile, the Think Class Library (in C) was designed and written by an individual; it got that name after the group behind THINK C / Lightspeed C got behind it. The lone designer of the original Think Class Library was Gregory H. Dow.
It is a subjective statement, but I believe that more than four times as many big-name Macintosh apps were written in the Think Class Library, in C, than in Apple's Pascal MacApp. There was big interest in C for the Macintosh, while Apple repeatedly pushed their own Pascal, until Apple could control their own C compiler in the Macintosh Programmer's Workshop (MPW). (Edit: MPW C existed internally at Apple for a long while, but there was some disconnect between that and the developer tools marketing.)
Gregory H. Dow then designed and built a second-generation framework in C++, called PowerPlant, for Metrowerks CodeWarrior. PowerPlant was used for many, many more commercial applications after that. Pascal faded away on the Macintosh OS, even with Apple's forceful support.
"There are only a few comments in the version 1.0 source code, most of which are associated with assembly language snippets. That said, the lack of comments is simply not an issue. This code is so literate, so easy to read, that comments might even have gotten in the way."
I also find comments get in the way when they state the obvious; unless they add to the understanding of the code in some way, it is best to leave them out IMHO.
This [0] is one of my all-time favourite comments. Chris Wilson explains how the stroke miter limit works in cairo graphics through the medium of ASCII art (and tidy maths).
What a lovely comment, explaining not just the why but also, in this case, the how; because it's not obvious how to derive the code from the requirement, actually explaining how is worth it.
And in a bunch of other places in the codebase from memory (cairo-path-stroke.c, cairo-path-stroke-polygon.c and cairo-path-stroke-tristrip.c). And there's a bug in there too, because they're working in the wrong space, so the checks are applied after outer transformations [0]... but it's still a great comment! :-)
I wish my IDE could interpret a special comment like
/* img "doc/uml.png" */
or something similar that it would use to display an image inline with my source code. There are so many times I've created an FSM diagram with graphviz + dot that I'd love to include inline.
JetBrains IDEs can do almost this using the PlantUML plugin (see my comment above): you just have to write a little graphml code to show the PNG. It's not inline with the source code, but in a separate pane. But a documentation generator (such as Sphinx) could show the image inline, with a little plugin.
I have done this with a documentation generator (doxygen) but I want it in Visual Studio. There is a plugin for it, but it's pretty buggy. It would be nice if it were a core capability.
The few times I had to do that in the past, I searched for some utility online that could convert my line diagram to ASCII art, and failed (so I had to do it myself). Does anyone know if such a thing exists?
Not line-art to ASCII art, so not exactly what you're looking for, but PlantUML [1] generates software-related diagrams from ASCII text. There's a plug-in for JetBrains IDEs such as PyCharm that will render the diagram in a pane in the IDE: you can include the ASCII text in a comment in code. You can also generate documentation (I use Sphinx, for Python stuff) that includes diagrams generated from code.
Many people write the 'what' in comments, which is redundant, as we can see the code. Some document the 'how', but this can usually be deduced from the code, except when very clever algorithms are involved.
People seldom document the 'why', which is the most useful some years in, when no one knows the 'why' any longer and might make the wrong decisions in refactoring. Also, if you don't know the 'why', you often scratch your head with WTF even though the reasons were sound.
Spending copious time spelunking through commit logs to find the reason something exists would be better replaced by a succinct comment pointing the reader in the right direction.
Not to mention the misery one feels when they discover that oh, there is in fact no reason at all, or if there ever was, it is now truly lost.
The file I have open right now is 1500 lines with probably 400 revisions across numerous branches. Many of those revisions are going to be integrates across branches. How is my IDE supposed to know which of those revisions are the ones that I care about? And more to the point, how is it supposed to do that _quickly_ ?
> The file I have open right now is 1500 lines with probably 400 revisions across numerous branches.
I have a 1500-LOC file with 1100 revisions right here; "git annotate" processes it in 2.356 seconds on a 2010 MBP, and PyCharm barely takes any longer (maybe 3s).
> How is my IDE supposed to know which of those revisions are the ones that I care about?
It does not care, it gives you a baseline annotation and if that's too recent you drill down into previous annotations.
What happens when your version control dies? Of course it shouldn't happen, but we all know about Murphy's Law.
Or as I've encountered on multiple occasions: your IT team is struggling to figure out why your network/version control/etc are extremely slow... 3 second queries are now taking 60 seconds... we will be rebooting servers a few times... and restoring backups... hopefully things are working better tomorrow!
Or what happens when you migrate to another versioning system? Where I was working migrated from MS SourceSafe years ago and a lot of comment history was lost in the conversion and apparently it wasn't worth the time/money to figure out why only half the comments made it over.
I'm sure there are more scenarios.
The point is: comments in plain-text source files can be invaluable.
VCS UI generally have a way to re-annotate the previous revision (even "git gui" does), if the code was changed since the original commit but the change is not what I'm concerned with.
More often than not, you only need a few jumps before reaching the change you're interested in.
My rule of thumb is that "why did we change this?" goes in a commit message, and "why is this like this?" goes in a comment. The latter is less often necessary than the former, but if the reader of the code would otherwise spend five minutes puzzling it out, or worse, would remove an apparently incorrect bit of code, it's worth the effort of the comment.
This seems wrong to me. If the "why" is not obvious from the code itself then it should be clearly explained in comments. It should not be detective work to figure out why some code is written in a non-obvious way. Otherwise people will be afraid to improve code.
> If the "why" is not obvious from the code itself then it should be clearly explained in comments.
Except you can't trust comments. Even assuming the comments were relevant at one point, they don't get updated as the code changes, and as new code gets added they can drift away from the original code they were supposed to explain.
Commit messages don't drift, they're immutably bound to the changes the commit performs.
And because they're not bloating up the code, it's much easier to be clear and verbose in commit messages than in comments: the author can easily explain things that seem obvious to them but may not be so for a reader in a few months or years, without lowering the overall readability of the code, which is not the case with comments.
People will criticise comments as "obvious" and "unnecessary"; I've yet to have a colleague or contributor criticise a detailed commit message for any reason other than its being wrong.
> It should not be detective work to figure out why some code is written in a non-obvious way.
Get proper tools. Browsing annotations and reading commit messages is not "detective work", it's absolutely straightforward and means you're actually using your VCS as something more than a glorified tarballs directory.
> Browsing annotations and reading commit messages is not "detective work", it's absolutely straightforward and means you're actually using your VCS as something more than a glorified tarballs directory.
That works until the VCS history is lost. For an on-topic example, this usually happens when a formerly closed-source code is released to the public: only the final state of the source code is released, with none of the VCS history. This can also happen when changing to another VCS; often the history is kept only on the former VCS. Some projects have gone through several of these events; I challenge you to find early StarOffice history when trying to understand why some piece of LibreOffice code is written that way.
And there's also the tendency in some places to reference bug tracker issues in commit messages. Said bug trackers tend to have an even shorter life than the VCS. "Fixes issue #1234" is useless when issue 1234 was on the old bug tracker, and both it and the new bug tracker were lost when the company was acquired and migrated to a third bug tracker.
Comments may drift if the code is not maintained properly, but commit messages will also drift: the more you edit, move, and refactor, the harder it gets to connect the code in question with the relevant commit message. I don't believe good tools will save you here.
Comments you can at least make an effort to keep up to date, especially if you have a policy that all comments should actually be relevant.
Comments shouldn't be obvious and unnecessary. They should explain things which cannot be inferred from the code itself, e.g.:
// have to call reset() twice here because the XZW API has a bug which restarts the server if reset is not called twice. See ...
As someone just learning coding, I find myself drifting from the what to the why a great deal more in my own code, if only because I can read the what a bit more easily now than in the past... and the why concerns me more now.
"Doing this here because x,y,z, maybe better to do a,b,c later on but would have to do d,e,f" and so on.
I do worry a bit about doing it sometimes, because in its own way it admits some sort of shortcoming... but still I do it.
When reading other people's code, the 'why' is often the question I'm wondering about, but one that rarely anyone answers.
Sometimes the code is perfectly easy to understand, but the reasons for the code may still be a complete mystery.
e.g. This is the kind of thing I find myself wondering very often while browsing our company's codebase:
"Why are we only appending the jurisdiction on this partner and not the others? Is this intentional or an oversight? Maybe the unit tests will tell me more..."
I think unit testing is underrated in this area: no documentation beats some additional unit tests that show exactly how to use the code in the real world.
Even if it blends over a little into integration testing, the additional test work (and time to run tests) really pays off in terms of understanding and reusing an old project.
I think it would be better in this case to add a flag to the vendor that indicates it needs the jurisdiction added and the flag should be named for the reason.
Besides being cleaner, when you add the next vendor, you won’t be searching all over the codebase for if(vendor)’s to add new special cases.
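A quick sketch of that idea, with all names hypothetical (in Pascal, in keeping with the thread): the condition now documents its own reason, and adding a vendor is a one-line change.

    program VendorFlags;
    {$mode objfpc}

    type
      TVendor = record
        Name: string;
        RequiresJurisdiction: Boolean;  { named for the reason, not the vendor }
      end;

    procedure BuildRequest(const Vendor: TVendor);
    begin
      Write('request for ', Vendor.Name);
      { no scattered if (vendor = X) special cases; the flag says why }
      if Vendor.RequiresJurisdiction then
        Write(' + jurisdiction');
      WriteLn;
    end;

    var
      A, B: TVendor;
    begin
      A.Name := 'PartnerA';  A.RequiresJurisdiction := True;
      B.Name := 'PartnerB';  B.RequiresJurisdiction := False;
      BuildRequest(A);
      BuildRequest(B);
    end.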
Sure but the only comments I really find like that are function-level comments where there's a rule that every function must have a comment, so people just duplicate the function name as a comment.
Especially in Java code - look at Android for example. Loads of "setFoo() // sets foo" nonsense.
But in general I think people use the "code shouldn't need comments" thing mostly as an excuse to be lazy and not write comments.
In my previous job my team leader had commented the "open" file statement (pretty bloody obvious), but didn't bother to comment the seek(8) call right after it to explain what the hell was happening at byte 8. It was an example of how not to comment code.
To play devil's advocate (and be really nitpicky), it doesn't say "return an error" but "return on error" which does contain some useful information: That this method doesn't have any kind of sophisticated error handling but simply aborts on error.
I don't think that's very useful - you see that it doesn't have any kind of sophisticated error handling simply by looking at the method's source code (which you need to do in order to see the comment).
Because it re-states what the code is saying, which makes it two things you wind up reading to absorb one fact. It also creates two update points if that code needs to be changed.
I prefer that authors spend time to make sure the code is clear about what is happening, and use the comments to tell me why it's happening, and that too only when it's not obvious, or counterintuitive, or there for a special requirement.
Just wanted to point out that comments are not to be confused with documentation-as-comments, like Javadoc or Godoc strings. Those need to be complete and verbose, especially if the code is a library or public.
It just seems a quite petty thing to get angry about. Reading those two things takes less than an additional half second, and on update you can delete the comment if you don't like it. I have seen projects fall behind schedule, and it was never because of tiny things like this.
Also, you are likely to find that comments like this are often the result of people just auto-writing them in the moment, i.e. writing them was faster than reflecting on them. That is not necessarily a good thing, but it saves time at the expense of slightly less effective comments.
The comment is more a symptom than a cause of problems. If useless comments are added, they're often an excuse for lazy code, in that they offer a way to say: I can't be bothered to write this better, I'll just add a comment instead.
Regarding time saving, I seriously doubt that code quality is proportional to time spent typing. Instead of typing two things, I would prefer the author spend more time thinking and type just one thing.
I had a coworker who would write comments like that. It wouldn't exactly anger me, but it was a bit strange. Why would you go through the trouble of writing a comment like that, but not leave any hints as to what the devil the 50 lines of spaghetti code above it do? Because that would always be the case: some obvious comment, and nothing for the horrible or complicated stuff.
I've only ever done it with stackoverflow because:
1. I think they'll be around for a long time, and
2. they have that policy of explaining answers on the page rather than just referring to external links (yes - irony not lost on me).
Any future dev looking at my 3-line validateEmailAddress() function who wants to learn about it can go to the site and read about it, and maybe find there's a newer status quo way of validating email addresses.
I suppose you could argue that I should copy the explanation of what the regexp does into the comment but really it's just asking for it to become redundant and if someone really wants to get into it they'll probably benefit from reading the entire up-to-date discussion.
Admittedly this is a very specific kind of situation, but it doesn't feel unreasonable or unprofessional to me (bearing in mind that I'm not generally a copy-paster; I prefer to write my own code, but in the case of things like email validation I think it's more professional to find an established algorithm than to roll your own, unless you have a specific reason to do otherwise).
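For illustration, a hedged sketch of what such an attributed snippet can look like (hypothetical function and placeholder URL; the regex is a common minimal sanity check, not one from any particular answer; uses Free Pascal's bundled RegExpr unit):

    program EmailCheck;
    {$mode objfpc}
    uses
      RegExpr;  { regex engine shipped with Free Pascal }

    { Basic sanity check only. Pattern and rationale adapted from a
      Stack Overflow discussion; see the link for the full, evolving
      debate on validating email addresses (URL is a placeholder):
      https://stackoverflow.com/... }
    function ValidateEmailAddress(const Addr: string): Boolean;
    begin
      Result := ExecRegExpr('^[^@\s]+@[^@\s]+\.[^@\s]+$', Addr);
    end;

    begin
      WriteLn(ValidateEmailAddress('user@example.com'));  { TRUE }
      WriteLn(ValidateEmailAddress('not-an-email'));      { FALSE }
    end.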
If I cut and paste some code, even if not from Stack Overflow I'll document it. Partly because I want to attribute the code correctly, but also so that someone later can go back to the original source. The original source may have additional documentation, explanations, assumptions, etc. that people in the future may want to know. Plus it helps to know that this block of code is an external cut and paste and that modifying it should be done more carefully than other code.
This tends to only be for tricky or time-consuming algorithms. RGB to HSL/HSV, is point in a polygon, hash function, etc. Those point-in-polygon functions in particular can be densely optimized to the point that it is not obvious a) how it works, b) what the original algorithm was.
The same person who would be able to write a readable, understandable comment about that horrible and complicated stuff would also be able to refactor it and make it clearer.
That is why: the person left a comment on literally the one thing in that code he was able to explain.
I sigh often when I read (bad) code. I don't get angry. That, to me, seems to be a bit much. I'm not trying to be judgemental, I just truly don't understand why anger.
If for no other reason, I have gotten in the habit of doc'ing my fields, properties and functions with javadoc/xmldoc just because my IDE has such good support for popup documentation when I hover over an identifier.
Saves a lot of time vs jumping to the definition to see what it does
> I also find comments do get in the way when they state the obvious, unless they add to the understanding of the code in some way it is best to leave them out IMHO.
I'm basically of the same persuasion. Multi-line comments don't belong in the body of code, and single-line comments are sometimes necessary, but I strive to eliminate them through more self-explanatory code (although I will prefer a one-line comment over unwieldy long variable names, which often outweigh their worth by reducing legibility, especially when talking math).
Beyond that, the only good reason I've found for large multi-line comments is when, even given the best implementation, there are subtle details in the concept you are implementing that can appear redundant or superfluous in the implementation; then it is very important to comment, to remind yourself not to break them the next time you try to digest the code.
Whatever your opinions on comments are, this file breaks them all, with a good reason behind each.
And it's still implementing a standard which should be fairly well defined, but... isn't.
So yes: multiple comments explaining code, explaining behaviour, reproducing relevant parts of the standard, and many others. The code both requires them to know wtf is going on, and at the same time is pretty close to as simple as multithreaded SIP can get.
Really cool bit of history! I keep coming back to the idea that studying Photoshop is a great way for learning about how to engineer desktop applications and GUIs. Discovering Sean Parent's _Better Code_ lecture series has been a dream come true!
It would be great if someone did a high-level diagram of the source code, to give a taste of the overall architecture without having to dig into the code, especially for those who, like me, don't speak Pascal.
What is weird is that it has the exact same acknowledgements as the Eudora article.
Including:
> Thanks to Steve Dorner, Jeff Beckley, and John Noerenberg for their encouragement and participation in this multiyear odyssey to release the code, and for creating Eudora in the first place. You should be very proud of what you did.
Edit: I mean it is weird because this was released in 2013, yet it has the acknowledgements of another, current release. Seems like a template change or something similar.
Strangely, I find the UI screenshots in the article clearer than some modern UIs (e.g. GIMP).
Why? I'm not sure, but maybe: (1) there is less functionality, so less stuff on the screen; (2) black and white means the icons are all high contrast; (3) the icon shapes are simpler (compared to GIMP's toolbar icons); (4) the borders on the forms are higher contrast: in GIMP's color picker, for example, the boxes where you can type appear to be gray, whereas they are black in these early Photoshop screenshots.
My very first commercial app (a pharmacy point-of-sale system, in 1986) was written in Turbo Pascal 1.x. Nice to hear I share some early-days similarities with a software system that is essentially a household name today.
NB: I recently downloaded a copy of Turbo Pascal 1.x and have been messing around with it in a DOS box on the Win7 laptop again. No idea where the source code I wrote 30-odd years ago is now, and no way of getting it off whatever medium it might be on either!
I used Turbo Pascal at work in an electronics R&D lab. My biggest achievements were a staff time management program and one that 'drove' a Stag EPROM/PAL/device programmer: a dev could just point to a device firmware text file in one of a number of formats (Intel Hex etc.) and it would be parsed for errors and sent to the programmer, with the right device selected, programmed and verified. This saved a lot of fiddling with the programmer itself.
If someone's going for TP 1.0 instead of the newer TP versions, I guess nostalgia plays a big role. Up until TP 4 or 5, it didn't have things we take for granted now (or heck, took for granted in the '90s), like units.
I bet there are some Adobe programmers reading HN; I don't think some rough estimates would be a violation of their NDA and I'd be curious to hear them, too.
You'd be surprised. While the code base has expanded on a geometric scale since 1.0 (we're now clocking in around 4-5M LOC), there are still routines that have survived all these years. While the implementation details have changed radically, a lot of the old architectural vestiges are going strong.
The more things have changed, the more they've stayed the same.
As a long-time fan of Photoshop and the Knoll brothers, I find this amazing. Going to figure out a way to pay homage to this code base. Any recommendations are welcome.
I know, isn't the future great? To read a web page, just enter a URL in Chrome; once it has loaded, go ahead and click "yes you may track me", then finally "reader mode", and the world is yours.
Safer, yes: pointers were less ubiquitous and much safer, and it had other type-safety features like subranges (~refinement types), sets, type-safe enums (though not proper sum types), ...
It had a raft of issues though: chiefly a lack of dynamically sized arrays and strings ("Pascal strings" combine a static maximum size of no more than 255 with a single-byte actual-length prefix, IIRC all on the stack), with only rudimentary dependent typing, making generic processing of these extremely difficult[0]; no ability to break (out of loops) or return early; and undefined order of evaluation of boolean expressions (and/or). Kernighan actually wrote an entire essay, "Why Pascal is Not My Favorite Programming Language", in '81.
More modern evolutions have relaxed or fixed some of these since, not necessarily in great ways (Free Pascal's "String" can be one of 3 different string implementations depending on compiler settings & active pragmas[1], and that's a small subset of the actual stable of string implementations available).
[0] outside of builtins (which weren't limited to the language's "user" semantics, much like Go today) you'd need an implementation for each static length of array/string
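To ground the type-safety list above, a small Free Pascal sketch (my own example) of subranges, sets, and type-safe enums, plus the bounded "Pascal string" from footnote [0]:

    program ClassicPascal;
    {$mode objfpc}

    type
      TDay  = (Mon, Tue, Wed, Thu, Fri, Sat, Sun);  { type-safe enum }
      TDays = set of TDay;                          { built-in set type }
      TPercent = 1..100;                            { subrange type }

    var
      Weekend: TDays;
      P: TPercent;
      S: string[20];  { bounded string: 1 length byte + up to 20 chars }
    begin
      Weekend := [Sat, Sun];
      if Sun in Weekend then
        WriteLn('set membership is built in');
      P := 42;        { P := 0 would be rejected at compile time }
      WriteLn(P);
      S := 'at most 20 chars';
      WriteLn(Length(S));  { length comes from the prefix byte }
    end.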
Younger programmers usually believe that the best thing is the one that allows you to effortlessly do anything you want. Only later do they start to see that some limitations and hurdles might actually be beneficial.
I was careful to qualify "some" programmers and avoid generalization. Some younger programmers like this, some others like that. The difference is in ancient times I don't remember so many people liking discipline languages.
Funny thing is Pascal was seen as the constraints language at that time, while C was more liberal. Different times.
In Pascal's evolution (Delphi and Lazarus), a lot of automatic memory management, including proper GC but most often simple reference counting, has been incorporated over the years. Also, the memory for a lot of standard library resources is managed hierarchically.
In short: You still need to know how to deal with memory manually, but most of the time you don't.
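A minimal sketch of the hierarchical part, using the Classes unit's TComponent ownership that both Delphi and Free Pascal/Lazarus share:

    program Ownership;
    {$mode objfpc}
    uses
      Classes;

    var
      Owner, Child: TComponent;
    begin
      Owner := TComponent.Create(nil);    { no owner: we free it ourselves }
      Child := TComponent.Create(Owner);  { owned: never freed by hand }
      WriteLn(Owner.ComponentCount);      { 1 }
      Owner.Free;                         { frees Child too }
    end.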
Pretty much all non-trivial Pascal code from that era included inline assembly, either for performance or for accessing some low-level functionality. That's the ultimate gun for shooting yourself in the foot.
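For anyone who never saw it, roughly what that looked like: a trivial (and deliberately pointless) function using Turbo Pascal 6/7's built-in assembler. The real uses were hot pixel loops and direct hardware/BIOS access, which is exactly where the footgun lived.

    program AsmDemo;

    { Turbo Pascal's BASM lets you reference parameters by name;
      a 16-bit Integer result is returned in AX by convention }
    function AddFast(A, B: Integer): Integer; assembler;
    asm
      mov ax, A
      add ax, B
    end;

    begin
      WriteLn(AddFast(2, 3));  { 5 }
    end.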
That's a weirdly dismissive tone, given that Pascal was a massively popular commercial language before C++ really took dominance. It's also worth remembering that a lot of early Windows software was written in Pascal.
Yes. After using the compilers in my technical school classes, Turbo Pascal for Windows 1.5 was the very first one I saved money for, buying it with a student discount for about 150 euros in today's money.
Today's Pascal is called Oberon, which is also designed by Niklaus Wirth. Oberon is a pure structured language* with support for modular and object-oriented programming. In a sense Oberon represents more than 50 years of refinement starting from Algol-60 via Pascal and Modula-2. The language report has shrunk to about 16 A4 pages. I would say that Oberon is about as close as you get to Dijkstra's idea of a humble programming language. Among the compilers I can recommend OBNC (http://miasap.se/obnc).
*in a pure structured language each statement sequence is either fully executed or not executed (there are no goto-like statements)
Seems you never had the pleasure of using Delphi then. It was by far the most pleasurable experience I've ever had writing code - in generations to come it will be remembered as the peak of coding ease and efficiency.
The feature matrix is 37 pages long. There's all sorts of stuff in there. For example: UML is Delphi-only. Some tools for debugging multi-threaded programs aren't in the "starter" flavours. Nor is the ability to debug any process; nor to run until function return; nor to target Win64.
Don't know what it's like nowadays; I suspect it's fallen from its former glory.
Nevertheless, if you consider that $1500 is less than a week's salary for a programmer, you just need to work out whether a $1500 tool is going to save you a week of programming time. Back in the day I was using the heavily discounted education edition as I was a poor student; many products still have something like this (for example JetBrains' suite is free for students and OSS projects). But the price itself isn't really that much in the scheme of things.
I had the opposite experience as I love(d) Pascal but I never did warm to Delphi quite as much. Borland's IDEs were amazing though - for a long time I preferred Delphi's IDE over Visual Studio. And as for Turbo Pascal; even now I look upon that as one of the greatest IDEs ever released.
Turbo Vision (and the entire TP6 package) was an amazing environment. I think Turbo Vision in particular was helped by the fact that it was Borland's productized version of the framework they used for the IDE itself.
As a point of contrast, Microsoft shipped a similar sort of TUI framework with its Microsoft BASIC "Professional Development System". The PDS was Microsoft's original BASIC compiler (BASCOM) that had been integrated with the QuickBASIC (4.5?) IDE. In addition to the compiler, the PDS had direct ISAM support and a few other higher-end features targeted at people writing enterprise line-of-business apps.
The TUI framework was their attempt to let developers using the PDS write applications in the same visual style as the IDE itself. However, because the underlying BASIC language had neither objects nor function pointers, there was no way to implement any sort of callback model in the API. The net result was a framework that required huge amounts of boilerplate code to get anything done and was virtually impossible to use. So, while Turbo Vision was a bit harder to wrap your head around, it at least had the benefit of being commercially useful once you did.
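Written from memory, so treat it as a sketch: in Turbo Vision the entire "callback" mechanism is overriding a virtual method on Turbo Pascal's old-style objects, which is precisely what PDS BASIC could not express.

    program TVSkeleton;
    uses
      Objects, Drivers, Views, App;

    type
      TMyApp = object(TApplication)
        { events are delivered here; the override IS the callback }
        procedure HandleEvent(var Event: TEvent); virtual;
      end;

    procedure TMyApp.HandleEvent(var Event: TEvent);
    begin
      TApplication.HandleEvent(Event);  { default dispatch first }
      { then react to Event.What / Event.Command as needed }
    end;

    var
      MyApp: TMyApp;
    begin
      MyApp.Init;
      MyApp.Run;   { the event loop calls HandleEvent for us }
      MyApp.Done;
    end.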
This is a big part of why I tend to strongly prefer development tools that are actually used by their developers. The incentives are aligned.
I couldn't agree more. This is also the reason some programming languages eventually write their compiler in their own language. The phrase 'eating your own dog food' often gets used to describe the process.
That phrase came to my mind as I was writing. My first exposure to the term came in the context of Dave Cutler's requirement that Windows NT developers 'eat their own dogfood'... so it's a little ironic to me that one of my big complaints about the MS BASIC PDS is that they seemingly didn't.