How about creating a better product worth paying for? Eclipse was and is not exactly universally well liked.
> I'll be taking up Android again soon. Every time I try I find Eclipse [...]
Moreover, the next version of Android Studio will include C/C++ support based on JetBrains' CLion that will cover development with the NDK.
I actually like IDEs quite a lot, what I don't like are IDEs like netbeans, eclipse, visual studio or xcode.
I loved CodeWarrior absolutely to death; I was heavily into THINK C; the first development tool I ever bought with my own money was Lightspeed C, which fit on a floppy.
Also Ant? Gradle and Maven are significantly more versatile and better supported for Android development (with Gradle being a first-class citizen now) and I honestly don't advise you to torture yourself with Ant.
* Local file history separate from git history, with fantastic navigation/diffing support. It's an undo/redo button on steroids that keeps months of history.
* When searching for something, you get a full, scrollable preview of the surrounding context of each matching result. Results are organized within the file hierarchy and are collapsible.
* Remote debugging. Drop a .egg in your remote source and go.
* Annotated code, line by line, with the relevant commit and comment next to it.
* Auto-linting to catch common mistakes while coding. It'll immediately underline suspicious-looking code and tell you why. Lots of customizable style linters to keep your code looking pristine.
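To make that concrete: a classic example of such a check is flagging `x == None` (where `x is None` is intended). Here's a toy sketch of how a linter catches it, using Python's standard `ast` module — an illustration only, not any IDE's actual implementation:

```python
import ast

def find_none_comparisons(source):
    """Flag 'x == None' / 'x != None'; linters suggest 'is (not) None' instead."""
    warnings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Compare):
            for op, right in zip(node.ops, node.comparators):
                if (isinstance(op, (ast.Eq, ast.NotEq))
                        and isinstance(right, ast.Constant)
                        and right.value is None):
                    warnings.append(f"line {node.lineno}: use 'is None', not '== None'")
    return warnings

print(find_none_comparisons("if x == None:\n    pass"))
```

Real linters run hundreds of checks like this one on every keystroke, which is why the instant underline feels like magic but is mechanically simple.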
Obviously Python is a very dynamic language as well, so PyCharm has most of the same strong points. I'd been using the Community edition for a while, but finally picked up a Commercial license a couple weeks ago. In this case, the feature that finally convinced me to pay for it was no-kidding Remote Debugging. Drop a .egg next to your deployed app and stick an extra import and connection call into your code, and then PyCharm's debugger (which is another great feature in and of itself) just flat-out works over SSH.
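For anyone curious, the "extra import and connection call" really is only a couple of lines. A sketch (hedged: `pydevd_pycharm` is the helper module shipped with PyCharm's remote-debug egg, and the host/port here are placeholders for wherever your IDE is listening):

```python
def attach_debugger(host="localhost", port=12345):
    """Attach to a waiting PyCharm debug server, if one is reachable."""
    try:
        import pydevd_pycharm  # provided by the .egg dropped next to the app
        pydevd_pycharm.settrace(host, port=port,
                                stdoutToServer=True, stderrToServer=True)
        return True
    except Exception:
        # Egg not installed or no debug server listening -- just run normally.
        return False

attached = attach_debugger()
```

Guarding the call like this means the same code runs unchanged whether or not a debugger is waiting on the other end.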
Ultimately, both these tools have saved me enough time and effort to pay for themselves repeatedly.
These days the target market for WYSIWYG HTML editors would probably be using an online CMS like WP - which actually has a WYSIWYG HTML editor. Failing that, Microsoft Word actually had nice HTML output last time I checked.
> HTML WYSIWYG wouldn't have a developer as a main target, most developers would prefer editing the html directly
WYSIWYG always had a split-screen view which allows one to work in design and code mode at the same time (and see the changes in the other mode). It's very efficient.
> Microsoft Word actually had nice HTML output
MS Word 2000-2013 web page export is based on the old FrontPage engine; it generates invalid HTML4 and converts vector graphics and WordArt to VML, the proprietary predecessor of SVG, which IE10+ no longer supports (except in legacy quirks mode).
> online CMS like WP
The contentEditable HTML4 API that CMSes use is sadly broken in every browser, each in a different way - the experience and the output quality are far worse than FrontPage ever was - and Dreamweaver is better in every respect.
There is a reason why we have BBCode and Markdown: contentEditable isn't that great at the moment. And since several browser vendors have a competitive advantage with their web-based Office services (Office365 Web Word, iCloud.com Pages, Google Docs), they will never fix the various contentEditable bugs in IE/Safari/Chrome.
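Part of Markdown's appeal over contentEditable is precisely that the plain-text-to-HTML mapping is small and deterministic. A toy converter (not a real Markdown implementation; it handles only bold text and links) makes the point:

```python
import re

def tiny_markdown(text):
    """Convert a small Markdown subset: **bold** and [label](url)."""
    text = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", text)
    text = re.sub(r"\[(.+?)\]\((.+?)\)", r'<a href="\2">\1</a>', text)
    return text

print(tiny_markdown("**bold** and a [link](https://example.com)"))
# → <strong>bold</strong> and a <a href="https://example.com">link</a>
```

The same input always produces the same clean output, which is exactly what contentEditable implementations fail to guarantee across browsers.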
Also, I have no idea where you got the idea that WYSIWYG always had a split-screen view, unless there is some product actually named WYSIWYG. To me it has always meant What You See Is What You Get, and the entire idea was that you don't have to mess with how it works under the hood. As implemented in e.g. Word (or OpenOffice) you don't even have the option of doing that. Are you arguing that Word is not WYSIWYG?
Finally, take a look at the output from the WP HTML editor. I have no idea why you bring up the contentEditable API; either you are misinformed or WordPress uses something else, because the output is pretty clean and it can convert between that and free HTML entry.
WordPress uses the TinyMCE rich text editor, which is built on the contentEditable/designMode API; TinyMCE is one of many such editors that try to fix its many glitches: https://en.support.wordpress.com/editors/ , http://www.tinymce.com/ , https://en.wikipedia.org/wiki/Online_rich-text_editor
And just because some have never seen a good HTML WYSIWYG editor doesn't mean it's a bad rapid-development method that should not be integrated into a modern IDE.
Note that the generated HTML5 source code:
- contains VML code
- will not render correctly in any modern browser
- can only be rendered in IE11 by pressing F12 (Developer tools) and switching compatibility from "Edge" back to version 7 (or even 5) of the IE engine - then it is drawn "correctly".
To clarify, the generated code is invalid HTML 4.01 (20 errors according to the W3C HTML4 validator, https://validator.w3.org/check ) and the first few lines look like this:
<meta http-equiv=Content-Type content="text/html;
<meta name=ProgId content=Word.Document>
<meta name=Generator content="Microsoft Word 14">
<meta name=Originator content="Microsoft Word 14">
It's very efficient and less of a headache. WebStorm could embed WebKit and sync the code view the way Dreamweaver does. JetBrains' IDEs are Java-based, so integrating a C++-based WYSIWYG view might be more complicated - and all the Java-based browsers, such as HotJava ( https://en.wikipedia.org/wiki/HotJava ), have been discontinued.
Most important things are:
1) Static error analysis. Saves a lot of time. Really, hundreds of hours.
2) Code completion (VS users call it "IntelliSense").
3) Refactoring tools. You can change the name of a method in one place and the IDE will update every usage of that method throughout the code. It's extremely important.
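For a sense of what the "find every usage" half of such a rename involves at minimum, here's a hedged sketch using Python's stdlib `ast` module — a pure syntax walk, whereas a real IDE also resolves types so it doesn't confuse identically named methods on unrelated classes:

```python
import ast

def find_method_calls(source, method_name):
    """Return line numbers of every call to .method_name(...) in the source."""
    lines = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == method_name):
            lines.append(node.lineno)
    return lines

code = "x.render()\ny.draw()\nz.render()\n"
print(find_method_calls(code, "render"))  # → [1, 3]
```

An IDE keeps an index like this for the whole project, kept up to date as you type, which is why rename-everywhere feels instantaneous.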
- their method calls are excessively chained, which means a failure to analyse step 2 of the chain blocks analysis of all further chained calls
- their phpDoc either doesn't exist or is actively misleading at times
- they rely on magic methods all throughout the codebase without properly annotating them
or even better, pry-byebug!
It's amazing that in 2015 we have people that use Emacs, Vim, or other editors that don't have intelligent, realtime code analysis and we consider that "hardcore, programmer machismo".
Someone not getting their job done is a problem. Someone using vim as their editor is not. I could understand giving that feedback ("try an IDE") but telling them their job is contingent on it is probably missing the point.
Most of the people I work with seem to use vim/emacs, and it doesn't appear to be for "machismo" reasons. It's likely just what they're used to.
My trick (not really a trick) for efficiency is some simple key bindings to navigate between windows in Screen.
And if I really need power I'll run tabs inside of individual Vim instances within each Screen window.
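For reference, bindings along those lines look roughly like this in `.screenrc` (illustrative; the key choices are mine):

```
# Navigate split regions with a single keystroke after the escape key (C-a)
bind j focus down
bind k focus up
bind t focus top
bind b focus bottom
```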
I really think IDE dependence is one of the things that can prevent agility within a team. As soon as you introduce a product into your stack that a person's IDE doesn't have deep integration with, or doesn't play nicely with, all that productivity that person gained by learning that IDE is all of a sudden gone.
The best programmers I work with use Emacs or Vim and are good enough to keep up even though most others use IntelliJ IDEs, but it is a handicap - they are just good enough to overcome it. I have no doubt they would be even better in a good IDE.
Do you really think the best programmers spend enough time refactoring for the use of an IDE to help?
They design it well the first time, then refactor it to a better design as needed when new requirements arise.
> Is refactoring a bad thing now?
It isn't; he just meant that the best programmers refactor less (since they are the best, they must get things right, or predict the future, more often than the others).
And yes I know those I work with do, because we work on the same codebase and I see their commits.
But then, that's probably why those IDEs ship with sophisticated debuggers, right? ☺
For C/C++ specifically, the state of code analysis in emacs has greatly improved due to clang. For example, I get full auto-complete (including for variables declared 'auto') in emacs with irony-mode  for code completion, flycheck-irony  for on-the-fly syntax checking, and I get jump to definition/find references/etc via clang-ctags . I assume vim has similar extensions.
 - https://github.com/Sarcasm/irony-mode
 - https://github.com/Sarcasm/flycheck-irony
 - https://github.com/drothlis/clang-ctags
I use PhpStorm, CLion and PyCharm, and I'm happy to pay for them; the editor itself is very high quality, and having common tools across all three languages I use is great. If only they did a "golang" IDE.
Oh, that's amusing, and rather misinformed. Emacs in particular, when programming Common Lisp or Clojure, has a really impressive environment that Java IDEs can only aspire to (in part, that's an unfair comparison, because CL and Clojure are well suited to interactive programming and Java isn't).
I would be more careful with strong words like "professional negligence" or "hardcore, programmer machismo".
What is amazing is that in 2015 there are people out there who believe Vim or Emacs have no intelligent, realtime analysis.
In fact, in the year 2015, Emacs and Vim have the best realtime, intelligent analysis out there for those who know how to use them and are trained on them.
You can use elisp inside Emacs and automate EVERYTHING, running circles around anything commercial.
I have a company making software and I know. With the proper training those tools are incredible.
I develop Cursive, an IDE based on IntelliJ for Clojure code. Its completion and navigation are categorically better than Emacs's, even for a language that supposedly has one of the best levels of support under Emacs.
To say that Emacs is automatically better than everything else is pure Stockholm syndrome.
I'm not saying there isn't any, and certainly I've used better refactoring tools than https://github.com/clojure-emacs/clj-refactor.el in other languages. But what on earth is wrong with the navigation? The main advantage I've seen from a colleague is the automatic highlighting of usages, but I'm pretty sure I can set that up in Emacs too.
But you're right that the tone of my comment wasn't great, and in general getting involved in these discussions is just a bad idea. Every so often I see a comment that makes it difficult to resist, but I always regret it.
To answer this specific question - Cursive's Java interop support is much better than anything else I'm aware of. It implements Clojure's type inference in the editor, so method calls are accurately resolved to the right method based on the number and types of the parameters, and completion takes this into account so that you can explore Java APIs almost as well as with Java in IntelliJ. Cross-language navigation and Find Usages works - in Cursive you can navigate from RT.var() calls to the Clojure code, and Find Usages from Clojure will find the RT.var() calls. This also works for other JVM languages - there are quite a few Cursive users with mixed Clojure/Scala codebases, and this all works there.
You can search for all usages of a particular keyword, and it will also find all local bindings destructured from it using :keys. Namespaces will be auto-required during completion based on examples elsewhere in your project, not hard-coded config. This works in the REPL too - you can type str/tr and when it's completed into str/trim Cursive will automatically (require '[clojure.string :as str]) in your REPL.
One other nice thing is that since Cursive works from source, pretty much everything that works for Clojure works for CLJS too.
There's lots more along the same lines, hopefully this gives you an idea. Again, there's nothing wrong per se with Emacs' navigation, but IntelliJ just provides a much more sophisticated infrastructure for it. Obviously elisp is Turing-complete so all this could in theory be implemented but it's much harder, as Yegge describes in the article I linked in my other comment. The clj-refactor guys are doing a great job but the lack of a good indexing infrastructure is going to mean that a lot of this functionality is hard or impossible to implement.
Emacs is a great choice for a lot of people and if you're willing to invest the time to trick it out and maintain it, especially for Clojure you can get a really nice environment. But probably the feature that most people like about Cursive is that it just works out of the box, and stays working with no messing around and no development of your editor required. And WRT my original comment - to say that Emacs has the most sophisticated runtime analysis is just wrong.
I feel the same.
Having been brought up with all the Borland products, Smalltalk VisualWorks and Oberon, I really cannot grasp why they keep themselves in a UNIX V7 world instead of a Xerox PARC one.
And I did use Emacs for several years, while deeply missing the Borland tooling, as that was the best thing one could find on UNIX back then (Vim did not exist yet, just vi).
However in 2015, there are so many nice IDEs also available for UNIX...
Emacs is more extensible, and more easily extensible, than Eclipse.
Yes, an IDE is pretty nice for a particular kind of horrible enterprise coding: the sort where no-one understands the system, where everything's horribly over-architected and where, yes, ultra-fast auto-completion is necessary to get anything done. But…why live like that? There're interesting problems and fun environments where one doesn't need to live in that kind of hell.
emacs can handle that, but you have to start asking yourself if it even makes sense to do. A lot of times, those IDEs are just managing spurious complexity, not actually helping build something great.
One can look at it the other way. If your language needs a million lines of IDE code to function and for programmers to be productive with it, maybe it is time to sit down and tell that language to "get lost".
I think it's mostly that the current state of "intelligent, realtime code analysis" is so terrible that it can be replaced with "rename" and "find references of". I.e. ctags.
Yet some people hate it because it's not "modern", mostly meaning you can't control it with the mouse.
Try that with rename and you will quickly run into trouble, unless you have only one class that has a method called length.
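A concrete toy example of that trouble — a purely textual rename over hypothetical code, which can't tell two unrelated `length` methods apart:

```python
import re

code = """
class Rope:
    def length(self): return 10

class Playlist:
    def length(self): return 3   # unrelated class, same method name
"""

# Naive textual rename of Rope.length -> Rope.size also mangles Playlist:
renamed = re.sub(r"\blength\b", "size", code)
print(renamed.count("def size"))  # → 2: both classes got renamed
```

A type-aware rename would touch only `Rope.length` and the call sites whose receiver is a `Rope` — which is exactly the analysis ctags-style tools don't do.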
Refactoring is still in its infancy. A) It's not entirely possible in C/C++ because of macros. B) Refactorings are serialized extremely poorly to diffs, so any boon from refactoring is sidestepped by SCM. C) Most of the refactoring done by programmers cannot be done automagically by the IDE (e.g. type signature changes are painful as hell). D) 95% of changes are supported by ctags-like functionality, i.e. a basic identifier index plus re-compiling to fix the bugs.
Given this, I'm pretty sure IDEs have made a lot of progress towards renaming and looking up references really well, but not a lot of progress towards actually being useful.
That said, I do find that when the language supports it, a precise tool to find usages and/or rename is worth its weight in gold. But this is hardly anything near "intelligent, realtime code analysis". Pretty sure that's referring to typeahead, which again is a step removed from a source code index and can be faked by using a regular expression to extract identifiers to suggest.
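That "faked" identifier-based typeahead really is only a few lines; a sketch:

```python
import re

def completion_candidates(source, prefix):
    """Naive typeahead: every identifier in the buffer matching the prefix."""
    identifiers = set(re.findall(r"[A-Za-z_][A-Za-z0-9_]*", source))
    return sorted(w for w in identifiers if w.startswith(prefix) and w != prefix)

buffer = "def compute_total(items): return sum(item.price for item in items)"
print(completion_candidates(buffer, "com"))  # → ['compute_total']
```

This is roughly what dumb editor completion (e.g. word-based complete in Vim or Emacs' dabbrev) does: no parsing, no types, just tokens already present in the buffer.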
Yup, until merges/conflicts. :) Semantic diffs with refactoring operations would solve this.
> (e.g. type signature changes are painful as hell)
Resharper handles this very well.
For the life of me, I've never been able to get any advanced code stuff to work. Auto-complete etc.
In addition to code completion, having documentation available to you as you type, and easily navigating code (e.g. jump to definition), you also have all the usual emacs goodies (unlimited undo/redo, easily defining keyboard macros, writing functions for emacs and binding them to keys).
After 9 years of using emacs, I still don't find it unambiguously better than Visual Studio for working with C/C++. The text editing is a lot better in emacs, but the code browsing/completion is not - and for large, multi-person projects I've always found the code browsing/completion functionality more important. You can always load files into emacs for one-off operations if you need to do something in particular.
I never needed it in Python, because Python libraries are so clean and orthogonal.
I think you hit the nail on the head: IDEs are an excellent choice for when you don't know what you're doing. :p
Seriously though, auto popups of docs are very helpful for huge libraries and new codebases. I wrote a little script for that.
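A sketch of what such a script can look like (not the original; it assumes the symbol is importable and just leans on the stdlib's `importlib` and `inspect.getdoc`):

```python
import importlib
import inspect

def quick_doc(dotted_name):
    """Return the docstring for e.g. 'json.dumps' or a bare module like 'json'."""
    module_name, _, attr = dotted_name.rpartition(".")
    if not module_name:           # bare module name, no attribute part
        module_name, attr = attr, ""
    obj = importlib.import_module(module_name)
    if attr:
        obj = getattr(obj, attr)
    return inspect.getdoc(obj) or "(no docstring)"

print(quick_doc("json.dumps").splitlines()[0])
```

Hook something like this up to a key binding and you get poor-man's doc popups for any installed library.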
These days there are also things like eclim. I don't know - it seems like vim-style editing is still the best way to get around a codebase and actually work with the text, but IDEs have great integration for working with the code. So I think the optimal solution is a "best of both worlds" approach, which means either a vim plugin for the IDE or an IDE-features plugin for vim. I see the former setup a lot.
and my wild guess is that emacs actually has great code integration for Lisps. I don't know what the state of the art Lisp IDE is but I'd be shocked if it was better than emacs.
For me, I've used both IDEs and plain ordinary text editors for close to 30 years (the first IDE being QuickBasic 4), and tend to find that IDEs get in the way more often than not. My personal software designs tend to use single components for single functions, so the additional complexities of an IDE's second compiler, second build tool, and second VCS front end are less than appealing to me. That said, I've gotten a lot of use in the last 24 hours out of code refactoring and the IntelliJ debugger, so I think the honest answer is to use both approaches if you like.
You found me out! My preferred environment setup is nothing more than machismo directed at you, a complete stranger. If only I'd had the humility to try an IDE, I could have been enlightened.
My eyes hurt from rolling them so hard at your comment.
Emacs, SLIME and Common Lisp is a combination I would grant an exception to. The major productivity boost I credit to that combo is the ability to change the code on the fly while debugging, especially Common Lisp's continue-from-exception mechanism.
Smalltalk environments have had this ability forever. I just imagine how much more productive I could be if in Visual Studio I could edit-n-continue on everything...even if my code wasn't in a successful compilation state.
Java has this and Jrebel has this on steroids, but I want to be able to edit-n-continue even if my compilation state is broken. Just throw an exception when it fails because of bad state.
like 'zero'. ;-) Use Clozure CL: http://ccl.clozure.com
> Clozure CL also provides a mechanism for defining callbacks: lisp functions which can be called from foreign code.
> Clozure CL provides facilities which enable multiple threads of execution
Runs on Linux / Windows / Mac OS X and others.
Who in 2015 pays $2,147 to be able to develop software? Note that you'll need to spend even more to target mobile and there's no Linux solution yet either.
Delphi users refuse to accept it (or that mobile and web are here to stay, or that there are better VCS systems than Subversion, which is the only VCS that Delphi's IDE fully supports), but the days of the expensive, proprietary IDE are indeed dead. MS releasing a free VS Community Edition was just the final nail in the coffin.
The mentality of "good enough, and not even that" is pervasive. If the IDE market is dead, it means the professional pride of the industry as a whole is nonexistent.
(Eclipse does not even come close to amateur level, let alone professional. Vim and emacs, which you have to configure yourself to get close to a decent level of proficiency, don't count either - I'm using all three.)
JetBrains, stick to what you do best.
The packages were structured so that some really nice features were bundled with stuff many would likely never use, and the upcharge for each tier wasn't small.
If you are buying for yourself, then it is 199 euro which is less than half the price (although that doesn't include VAT).
A commercial new license is $500.
Not sure where you get the big price drop from ;) And yes, it is easily worth $500.
It goes on and on. When I considered it for a project the cost came out to almost $5000! Instead we went full open source (including JetBrains' open source version of PyCharm) and got lots of functionality one couldn't find at any price on the Delphi platform - which incidentally is, just now in 2015, beginning to set up package management. Unfortunately it will be tightly controlled by Embarcadero and they've issued all sorts of warnings about not approving code that replaces functionality in Delphi (in short, they're afraid of competition). The community has no control over the language or the product and the diehards that are left treat it like a religion. They have something called "MVPs" who actually sign a contract to never disparage the language or Embarcadero! In exchange for selling their integrity they get free copies of the product. Completely coincidentally, they'll tell you that the product isn't overpriced. They'll also tell you it's used all over but no one talks about it because "it's their secret weapon". Delphi's product manager told me that he fully believes that "Delphi has had a greater impact on the business world than Python ever has". You not only have to pay all that money, you have to deal with the Scientology of programming languages. :-(
So no, you pay far more than for IntelliJ, you have no control over the product, you have no working roadmap, you don't even have RELEASE DATES for new versions. It's a whole other world over there than what the rest of us are used to.
I think the business of software as a downloadable application is dead. See what Microsoft is doing.
* They lost Anders Hejlsberg
* Free compilers got acceptable.
* Lack of backward compatibility: new VCL components from one version weren't compatible with older ones, and every new version required buying new components.
* Too expensive.
* No new books to learn Delphi; the existing ones are too old.
First, at the time Borland (or, briefly, Inprise) tried to move into the "enterprise" market, with expensive, nebulously defined "middleware" products like Midas. This is why they aggressively pursued failed tech like CORBA. It was a disaster, and had nothing to offer those who wanted to get stuff done. They also clung to BDE despite the fact that ODBC was much more mature and fast. Serious users used some ODBC components that someone built.
They forgot that Delphi was a tool to build great apps in, and didn't figure out that Delphi was great on the back end. It was all about GUI apps, but it turns out that it was incredibly productive for non-visual apps, too. With a better strategy, Delphi might have competed with Java, but they missed a lot of what made Java great in the beginning. Support for web development, even sockets, was near-non-existent.
One problem with backend apps in Delphi was that it was painful to work with C libraries. One huge thing they could have done was to make it trivial to generate C bindings, but the best they had was a rather terrible tool to convert header files to Pascal files. (I wrote a much better tool that understood macros and could even translate complex C++ headers like Microsoft's MAPI, but I'm sure Borland could have done even better.)
Delphi also tried to pursue COM (in addition to CORBA) as a distributed programming glue. Delphi 4 even had a typelib editor (which was buggy and horrible), which you needed to use to get any performance at all; the automagic "OLE Automation" support built into the language was awesome but really slow. It turns out that distributed programming with COM was not very mature, and trying to use it with Delphi was painful (though possible).
Borland also got distracted by Linux (not a good fit for a closed-source company) and by C++ (C++Builder could never be as successful as Delphi since they didn't control the language and had to extend the C++ compiler with custom directives to make it work the way Delphi did).
In the end, it's best summarized as: Lost focus, didn't realize what they had, pursued the wrong markets. Being stuck with a proprietary language didn't help, of course.
The typelib editor, for example, was atrocious in Delphi 4 (maybe it got better in later versions), and if you wanted to use a COM-based library (MAPI, TAPI, OLE DB, etc.) you couldn't do it without the headers. Since so much of COM came from Microsoft, it felt like fighting a losing battle against the official way of doing COM programming (Visual C++, at the time). Ironically COM programming seemed much worse in VC++, given the lack of language integration.
Funny thing is that now, around 20 years later we have .NET going fully AOT with .NET Native (kind of Delphi experience) and C++/CX with XAML (kind of C++ Builder experience).
But the web started developing, OS X started to win back the crowd and Linux showed potential (I stopped doing Delphi development when I moved to a linux box).
Since then they have added a lot of the functionality to do cross development, however a lot of it was too late, and worse a lot of it is ugly. I have never found a rich client building experience as good as Delphi. I still think it is one of the best IDE / GUI Builders packages ever assembled. However when I went for a look back at in a few years ago, the price was eye watering, and unfortunately it seems if I want that style of development my best bet now is C#.
And as best I can tell, media production has long been a bastion of Apple.
It likely didn't hurt that OSX is deep down a BSD, and so you could switch out the L in LAMP and be on your way.
It wasn't until Mac browsers adopted "standards" which matched the Windows way of doing things (96dpi, sRGB, MS fonts) that the Mac was really a viable web development platform.
Looking at what we do on websites today, and how much it costs to build it, it sure feels like we've been taking steps backwards for quite a while.
It's easy to blame management for shifting focus to enterprise software, but they had to try doing something new.
"In 1996, Hejlsberg left Borland and joined Microsoft." "Since 2000, he has been the lead architect of the team developing the language C#."
1997: "Microsoft also offered Mr. Hejlsberg a $1.5 million signing bonus, a base salary of $150,000 to $200,000 and options for 75,000 shares of Microsoft stock. After Borland's counteroffer last October, Microsoft offered another $1.5 million bonus, the complaint says."
I learned Windows programming on-the-job using Delphi and then thought I should investigate a "more common" C++ environment (I already knew console-based C++). Learning that _Visual_ C++ had no similar RAD environment was really shocking. My resource editor isn't attached to an automatic code generator? Say what?
After that experience with Visual C++, I'd assumed Microsoft did roughly the same thing with "Visual C#" and never gave it more than a cursory look. I also had the fear that this new .NET thing would be yet another technology they hype then deprecate. What's the official Windows GUI framework this month? Is it Win32? Or maybe MFC? Or was it WPF or WinForms or some new XAML thing? (Sorry, went off on a tangent there...)
I currently miss the rapid development tools we had 15 years ago. There is definitely a need for an HTML5 RAD tool - ideally with the combined functionality of JetBrains' WebStorm and Adobe's Dreamweaver.
I don't buy the argument that the IDE market was dead. If properly managed to cater to enterprise developers' needs, Delphi could have still been big. A lot of commercial software got built with Delphi not because it was affordable, but because it was better than the alternatives.
I did a marketing internship at Borland France around 2004, and Delphi was still the cash cow, but a great deal of effort was put toward "enterprise" software like StarTeam, Together, etc., which came from acquired companies IIRC.
It's easy to judge now, but back then the shift toward web apps was only slowly happening, and C#/.NET was not an obvious success yet...
It was in Portugal, but the focus was in ASP.NET not WinForms.
One thing that I liked a lot about Delphi (compared to VB, for example) was that there were plenty of free VCL components (sometimes with sources too) that could be downloaded and used in your own projects.
It's funny how gcc was around for a long time, but it was only in the 2000s that cash cow compilers started dying. I think that coincides a bit with Linux and OS X becoming popular for developers. For example, it wasn't until 2005 that Microsoft started providing a VS express SKU.
You see Linux start to kill old school commercial Unix (like Sun) around the same time. Probably the same trend.
But the really magical thing about open source is that it never dies. And there is always someone willing to look at the code and fix a bug, or add a feature. And if you had a new architecture and no budget you could not afford the NRE charge of a big compiler company to build a code generator for you. And so it got incrementally better. Bit by bit. And the better it got, the more useful it was, and the more useful it was the more people used it, and then at some point it crossed the point where the economics of using a free compiler and dedicating some staff to fixing the problems you had made more sense than buying a compiler and waiting for the compiler company to fix bugs.
It really is a fascinating thing to consider and I expect that someone could write a very entertaining book about it at some point.
Someone already did write that book, and that someone is RMS! If you haven't read it already, I highly recommend "Free Software, Free Society". And in the spirit of things, it is of course available Freely: https://www.gnu.org/philosophy/fsfs/rms-essays.pdf
(Though I do have a hardcopy which I'd never part with.)
Interestingly, at that company (a defense contractor) it was the government more than anything that changed that attitude. There were a lot of projects initiated by the DoD designed to test whether Linux and other open software were a good choice. Attitudes slowly came around.
And it (paid=good) is not an entirely unfounded position. There was some really bad open-source software, and Visual Studio is still top by some measures (quality of the debugger). But the amount of pain the VS compiler's noncompliance brought was just frustrating. And if you want really fast code on x86 it still makes sense to buy the Intel compilers (C++ and Fortran).
What happened to Borland Delphi?
Why has C prevailed over Pascal?
It derives from a Steven Wright Joke:
"All of the people in my building are insane. The guy above me designs synthetic hairballs for ceramic cats. The lady across the hall tried to rob a department store... with a pricing gun... She said, "Give me all of the money in the vault, or I'm marking down everything in the store..."
It was. That purchase spawned a career out of my evening hobby. Then, a few years later, when I was mostly using C or VB, and only firing up Pascal for fun now and then, Delphi came out. I was super excited. Until I realized I could barely use it, and none of my local book stores had anything on it that was helpful to read.
By the time the internet came along and I got into full-time development, .NET was out, with "academic" pricing for VS 2003. That purchase brought me real jobs, at real companies.
So, for me.. Borland got me curious, got me hooked, then I switched to tools that got me money. Maybe that's because the ecosystem had changed, but I can't be the only one.
The web was around for about a decade before Microsoft .NET arrived. In fact I remember coding Delphi in an evaluation copy of the IDE given away on a cover CD from .Net magazine (not to be confused with Microsoft .NET). And then a few years later downloading a pirated copy of a beta release of Visual Studio .NET.
It was actually the web that introduced me to Borland's Windows IDEs. I'd used Turbo Pascal extensively, but then got hooked on Visual Studio once I migrated away from DOS. Then I started seeing talk of these Borland development environments for Windows and thought "I love TP, so why not give these a shot". I actually much preferred those IDEs to Visual Studio as well, but alas my career and personal interests were switching to non-Windows technologies at that point and thus I never really found a practical use for Borland's Windows compilers.
A few years back I did need to throw together a basic Windows app for some clients, but by that point Delphi was dead and I'd forgotten a lot of Pascal's nuances anyway. So I ended up knocking up something in VB.NET; which was actually less painful than I remembered from the .NET 1.0 days. In fact almost pleasurable. But for all of Pascal / Delphi's warts, I did very much prefer that language over any of the iterations of Visual Basic. In fact I think I'd probably go further and say I preferred it over C/C++ as well.
As for what libraries VB had for networking, there was some HTTP OCX that was bundled with Internet Explorer (and I don't mean the Trident renderer), which was awful. But aside from that, there was only a basic wrapper around the Winsock C libraries. To be fair, the Winsock OCX was pretty decent for what it was, but you were left to write all the host layers (OSI) yourself.
My post was just to say that Delphi had quite advanced TCP/IP component options compared to other languages of that time (I cited VB because it was supposed to be the "easiest" language of the period).
I assume you mean 2002? FYI .NET was available to some developers before 2002, that was just the first non-beta release :)
> I am fully aware VB and VB.NET are two different languages...I cited VB because it was supposed to be the "easiest" language of the period
I mistook your post to reference Visual Basic because I mentioned it in my post where I discussed VB.NET. I say this because opening your post with "actually" suggested you meant your reply as a correction to my comment. So it wasn't clear to me that your post was intended purely as an interesting yet tangential anecdote.
But to be fair, we are talking the 90s and HTTP wasn't as ingrained into technologies as it is today. Heck, back then parameterised SQL hadn't been invented; Internet Explorer was basically the only browser (Netscape had largely been crushed and Opera was non-free); most computers still shipped DOS (if just as a bootloader); and classic ASP was a popular server side framework. So it's easy to be critical with hindsight but the whole ecosystem was still maturing.
Eventually I got around to buying TPW 1.5 and Turbo C++ for Windows for personal use.
What a shallow memory. I remember how Microsoft cornered Borland and others by using undocumented features of their OS, then making them incompatible. Remember, back in the 90s updating software at mass scale was a PITA; end users were expected to never update.
Also abusing their dominance to aggressively target key developers and contractors, copying any good application in the ecosystem and bundling it.
But SV didn't learn the lesson and we are now in more abusive walled gardens for our mobile phones. And some young people are parroting how wonderful Microsoft and Bill Gates are today.
Compared to what Microsoft used to be like, recently Microsoft has made some pretty surprising, awesome moves.
Specifically their open source movement (including .Net). You would never have seen that 10-20 years ago. I'm sure if you walked into Gates's office and said "I think we should open source this" - I have a feeling he would fire you on the spot.
They even now support Linux on their Azure platform - and it's not an it-will-run-but-you-are-on-your-own arrangement.
Now, I'm not saying Microsoft is a saint or that I would want to work for them. But the fact that they didn't take action against the Mono or ReactOS projects makes them OK in my book (not that they would have any real legal case - but they could drag those projects through an expensive lawsuit which would just end up with a deal to cease development).
Microsoft's new Open Source strategy needs to be viewed with exactly the same amount of suspicion as IBM's and Oracle's.
It may just be a move to reduce their expenses, since they may get unpaid software contributions and testing (Oracle's CEO pretty much admitted this in an interview).
I'd like to think that we will get to a more ethical future.
I wouldn't be all that snide about them. However, I can't say I find their platforms particularly palatable in general.
Some of Borland's problems were indeed Microsoft's doing. OWL vs. MFC, for example. OWL was the first object-oriented library for Windows, but MFC eventually won because MFC was always first to support Windows features. OWL lagged behind because Borland did not have access to in-development versions of Windows.
Borland applications such as Quattro Pro also suffered because Borland did not have access to in-development features of Windows such as OLE 2.0, and because of Microsoft bundling productivity applications into an Office "suite".
While some of this is Apple and Android, much of the root of this problem lies in the carriers themselves, not the software makers. Carriers have a long-standing history of building very high and tight walled gardens around their networks, the devices on those networks, and even the versions of software that run on those devices. It has actually gotten a ton better since the App and Play Stores have come around. By building those walled-garden stores, Apple and Android have pulled a lot of the burden for developers from individual carriers to single platforms. I would much rather have to work with a single walled garden such as the App Store than have to deal with coming up with a version of software for each carrier.
I would bet that the skeletons in the closets of these tech companies are far darker than what we find out about. I guess the short point is, usually the same people who point(ed) at the evils of Microsoft praise other companies who have done far worse.
* Chinese manufacturers aren't really "Silicon Valley tech companies" and workers there committing suicide has more to do with Chinese culture and poor work practices than technology.
* Keeping money in Ireland is not a crime. The government set up really dumb tax laws, and companies responded rationally to them.
* Patents suck, but again, it's a government issue. The government sets up patent laws, and you have to abide by them. Some companies are more abusive about this than others, but they all have to follow the law.
* Uber making false calls was unethical, and possibly a crime. Uber does suck as a company but not all tech companies are Uber.
* Google didn't approach Facebook to fix wages. Steve Jobs did that. It was illegal and the government fined everyone involved quite a bit of money (although probably not as much as they should have).
* "Google" doesn't sit on Apple's board. Some people from Google used to sit on Apple's board. If there is a conflict of interest they are supposed to resign, and that's exactly what happened when Google started competing with Apple's iPhone.
* Google doesn't bully suppliers, Apple does that (sometimes). Google doesn't manufacture Android phones, they get other companies to do that for them. Yes, even the Nexus phones.
* Microsoft was found guilty of anticompetitive practices in a case brought by the Department of Justice. If I remember correctly there were felony charges. If you don't understand how seriously unethical they were in the 80s and 90s, you haven't been paying attention at all.
It was over IE as well. OMG, they included a browser! Laughable nowadays, right? I used Netscape at the time and never had any issues installing or running it, even when IE hit 95% of the market.
Oh, and there has been more than one lawsuit to keep up with. A lot more than one lawsuit:
It's really hard to describe the full impact of what it was like to have everyone in the industry working under the constant fear of getting targeted by Microsoft for more than a decade, and the way that this shaped the market and technology as a whole. Summing it up as "LOL they bundled a browser" betrays a really massive ignorance.
1. Stay so small/niche that Microsoft won't notice or care about you.
2. Avoid selling software as your primary source of income. This is basically a variant of #1.
3. Try to get bought by Microsoft. Ha, just kidding! Everybody knows that Microsoft doesn't buy stuff that's 'Not Innovated Here'.
4. Gamble. Hope you corner your market and extract as much value as possible from that market before Microsoft figures out what you're doing and enters your market. Or (later) C&D's you over some patents they have.
The weird thing is that outside the Bay Area (in some parts of Canada, at least), I still see startups recruiting for a #1-like business model, more-or-less: "The best $SOCIAL_MEDIA iPhone app in $CITY" or whatever. They're not actually afraid of Microsoft anymore, but it's like the mentality didn't go away.
Apple could have easily had this market and more if they'd opened up.
Maybe you didn't work as a developer in the 80s. I used Borland, Watcom and many other vendors along with many other OSes as well. Magically in the mid 1990s you could even run something called Linux on your PC.
They stopped selling $49.95 compilers with IDEs and tried to be an enterprise company. People still buy IDEs and compilers. If they had kept doing what they were doing and improving their products, they would have been fine. Instead, they wanted the big money and it didn't happen.
It's weird thinking we used to pay for compilers
People still do. For example, Intel sells several compilers: https://software.intel.com/en-us/intel-compilers
"In Search of Stupidity: Over 20 Years of High-Tech Marketing Disasters" http://www.amazon.com/Search-Stupidity-High-Tech-Marketing-D...
I'd highly recommend it. Technology changes, but people don't.
Microsoft countered with the Quick languages.
Borland made Turbo Pascal for Windows and with Objects and then made Delphi.
Microsoft countered with Visual BASIC.
Borland made Borland C++ and JBuilder.
Microsoft countered with Visual C++ and Visual J++/J# and then later Visual C#.
The free IDEs and Free compiler languages ate into Borland's sales. Eclipse, Netbeans, IntelliJ, BlueJ, Sublime Text, GNU C/C++, Apple XCode, FreePascal/Lazarus, Ruby/Ruby on Rails, Python, Code::Blocks, etc.
In 2005 Microsoft introduced Visual Studio Express, a free version of their development tools.
Like Amiga, Borland had the superior technology, but cheaper/free alternatives undercut their sales.
Mostly it was the free and open source revolution that did Borland in.
In the height of the enterprise transformation, I asked
Del Yokam, one of many interim CEOs after Kahn, "Are you
saying you want to trade a million loyal $100 customers
for a hundred $1 million customers?" Yokam replied
without hesitation "Absolutely."
I'm saying this from a personal experience running a B2B company for 7 years and switching to B2C model three years ago. I would never go back.
Sure, having a couple of big customers looks like a more stable option at first, but when big ones hit the ground, they hit hard.
The levels of stress are beyond compare.
Of course, there are drawbacks and some things are different. If you aim for large market, you have to invest in marketing/PR much more - but you are allowed to care less on the customer support front.
(I hope I won't be eating my words in a couple of years, but from current POV, it seems much better to have a huge base of small customers than a few big ones).
Losing $1M can mean either: a) losing 10,000 customers, or b) losing 1 customer. Which do you pick?
There are more customers who can afford $100 than $1M. That's the important factor, not the way you want to present the numbers.
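The trade-off above can be sketched with some back-of-the-envelope arithmetic. This is a minimal, illustrative Python example; the churn scenario (losing exactly one customer) and all numbers are assumptions, not from the original discussion:

```python
# Hypothetical comparison: same $1M of revenue, very different concentration risk.
small = [100] * 10_000   # 10,000 customers paying $100 each
big = [1_000_000]        # 1 customer paying $1M

def revenue_after_losing_largest(customers):
    """Revenue remaining if the single largest customer walks away."""
    return sum(customers) - max(customers)

# Both books of business start at $1,000,000.
print(revenue_after_losing_largest(small))  # 999900 -> losing one customer barely registers
print(revenue_after_losing_largest(big))    # 0 -> losing one customer is fatal
```

The point of the sketch is that expected revenue is identical in both cases; what differs is the variance when any single customer leaves.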
If anyone is interested, Delphi today is on life support and exists only because it still has a strong base of followers from the Borland time.
The rapidly rising cost of the tooling has pushed Delphi out of reach for younger generations who are not used to paying for development tools.
People evaluating new development tools want to see and be impressed by what can be done with it. That sort of thing just isn't a priority for Embarcadero.
Sprint gave complete control over all the aspects of document production. Loved it.
I'm watching Qlikview do this right now - their original product was and still is a client app for Windows. It allows any business user to pull through datasets from pretty much any source as a poor man's ETL, then it allows that user to do BI tasks in a very simple way. Where they are succeeding is that they haven't given up on this market, but are using it to drive interest from inside companies - eventually skunkworks divisions show its value to the business, which then buys the server software and licenses.
It's growing pretty rapidly, and they seem to have a sustainable model. But it's driven by the individual user.
"On paper this may seem like a fairly minor adjustment, if you have the attitude (as Borland executive management had) that developers are a dime a dozen and any developer can be applied to any product or problem space. That may work for technical programming skills but it doesn't work for passion."
Regardless of the technical abilities of the product, it's a good reminder that the stewards of a product made great by the hard work of people who believe in it should be mindful to prioritize those people's passions in product decisions. This is particularly relevant in the open source world.
Didn't Borland attempt to charge for the use of a C runtime module? They attempted to profit from software developed using their C compiler. So, not only would they make money selling the compiler, but anyone that used programs written with their compiler would have to pay also.
Somewhere around that time, they lost the whole C compiler market, I think.
Basically the EULA forbade the use of Borland C++ to write compilers.
Anyway, why does answering this help with Borland?
Sun developed a lot of software that was given away for free (AFAIK): NFS, NIS, ...
We always used the UNIX vendor compilers back when I was doing UNIX development in the .com days.
Sun's got nothing on SGI in that department :)
It would be interesting to explore whether or not there are any commonalities in terms of what happened to Sun, Novell, Borland, etc. And if there are, it might be interesting, in turn, to ask if any of those lessons would be useful to contemporary software companies (especially startups, given the audience here at HN).
I suspect that there are some common factors that could be found, but I don't have a good feel for what they are off-hand, aside from falling back on cliches or tautologies.
Borland failed to deliver usable solutions for Linux and web at the time and after I got used to new tools I simply did not bother to try anymore.
We had a good product with fanatically devoted users; graphic artists persisted with LP on Mac OS 9 for years after Mac OS X came into common use.
The problem was that former Apple and Pepsi CEO John Sculley was a major investor. Live Picture's image editing product, also called Live Picture, was regarded as a tool by Wall Street, and as Sculley told us one day, "The street does not value tools companies".
So he tried to turn us into some manner of internet company so we could have a big IPO. Really the best he could come up with was that our - admittedly superior - competitor to Apple's QuickTime VR be used over the web for consumer product research.
He actually showed us a demo that depicted a virtual convenience store shelf in which one could use the mouse to pick up a tube of toothpaste, then look it over.
LP originally retailed for $4K but at the time it was $600. So he was going to drop a wildly popular six hundred dollar product so we could make a little coin by measuring websurfer response to animated toothpaste?
The $4K to $600 pricing drop was also a serious problem. While $4K was definitely too expensive, dropping the price so abruptly alienated our early adopters.
A while after I left LP, I found a Java memory leak detection tool called, I think, Optimize-It. And yes, garbage-collected languages do suffer memory leaks if you don't know what you're doing, often seriously so, as when I had to configure a job to reboot a client's server because it kept running out of swap space.
Optimize-It was independently developed and published at first but Borland acquired it, then sold it for quite a lot of money.
While a tool like that is indeed valuable, it's a lot cheaper to just reboot your server every night at midnight.
It should have sold for maybe $200 rather than the thousands of dollars that Borland charged.
I am quite sad, as Borland, Live Picture, Microport, Seagate and the Santa Cruz Operation once offered really good tech employment in Santa Cruz County.
There are other companies there now so it's not like there is no work, but attempting to transform what really was a tools company so it would sell during the internet bubble threw well over fifty hard-working, incredibly dedicated and talented people out of work.
A coworker and good friend became homeless and quite desperate. I am pleased to report he did finally find a job and so was able to pay for a place to live but when I spoke to him while he was homeless he had lost all hope of survival. Someone like him should never have been homeless.
Software is craftsmanship plus 'copy' from the distribution perspective.
That's why open source really provides good results. Open source is free because there is no barter involved. There is no real declining marginal utility. Every technology that was in a position to grow a reputation goes bust exactly the moment the last user installs a copy ...
The marginal utility has to be introduced by design :). Then you get money ... because you have a price.