Even if Firebug itself is no longer head-and-shoulders better than other tools, the fact that the basic model was copied and polished by Microsoft, Apple, Opera and later Mozilla themselves shows that Joe Hewitt had a pretty great idea back in early 2006.
Here's an article about the beta 1 refresh from Nov 2005, so it was out even earlier than that.
Your google-fu needs a lot of work: even my first search turned up plenty of articles from 2006 and 2007 talking about the IE dev toolbar, e.g.:
EDIT: Interestingly, reading through some of the comments on the first post, it seems people claim the IE dev toolbar was inspired by http://chrispederick.com/work/web-developer/ and that the initial release date for the IE Dev Toolbar was Sept 2005 (see http://c82.net/posts.php?id=23)
Note also that the release notes for Firebug v0.2 state: "This is a very early release - the code is only a few days old." Whereas the IE Developer Toolbar probably began a couple of months before its first release. So even by looking at the first mention on the web, we're comparing a days-old Firebug to a fairly-complete IE Developer Toolbar.
Spock: I am Vulcan, sir. We embrace technicalities.
There are times when it's important to be pedantic, like when you're writing software. (I don't think we normally use the word "pedantic" in these cases, but I'm not going to argue the matter.) And there are times when it's not important, like when an article mixes up the order in which two pieces of software were released.
Not to mention I was using Firebug before IE had any development tools, so it saddens me that Microsoft gets credit for it.
In ipython I use autoreload (thanks to a tip on HN) so that the code in the objects in memory is kept in sync with my editor. My workflow is: load up a bunch of objects I'm working on, try to get the expected output, fail, change code in my editor again, check the output, repeat. Within ipython you can even '%ed obj.broken_function' to edit it directly in vim. There's no doubt that the autoreload discovery improved my efficiency. Aside from the time it saves, I'm able to think of my running code and my on-disk code as the same thing. Don't underestimate how powerful that is.
But as I said, my code is now mostly frontend code. After over 15 years in web development the reload cycle has become second nature to me. It's silly though. I have a lot of state to maintain in the js and every reload is expensive. In my specific case I could be looking at 2 minutes to even load the objects from the web server.
I'm going to have a look at the chrome options to see what's available in terms of hot code swapping. I disagree that it's important for all browsers to support it. Even just finding a good solution in the browser I do most development in (chrome, presently) would be a major win.
Depending on your scenario this can be a major win. On my project it's not uncommon for it to take 10 minutes to load and process that data from disk to get it into objects in memory. For me I can do that once and then carry on working with my data and code all in unison.
Whatever works for you. I'm pleased with my current environment on the server-side. It works very well for me.
Let me rephrase then. There are amazing server side REPLs that allow me to change code on the fly, why can't I change code in my editor and have it reflected directly in my browser?
Much of the way the web is being built is evolving from static pages and presentation to full-blown applications running completely in the browser, which is why there is such a shift to providing tools for visualizing complex pieces of browser operations like rendering, compositing, and painting.
I don't think there are any fundamental problems with the tools being developed; the reason workflows are broken is that many web developers aren't being empowered to learn web development fundamentals. I can't imagine starting out as a web developer in 2013 and trying to jump in with all of these abstractions, tools and workflow items to understand; it'd be like trying to jump in as a newcomer to Rails at the latest version without all the context of the changes that led to the design decisions that currently make up the latest iteration of "The Rails Way".
I think tooling is in a pretty good place now; browser vendors need to start educating web developers about how to craft workflows and use the tools out there. My goal has been to try and educate more web developers about browser and web fundamentals, along with workflow fundamentals like automating tasks using Grunt.
 - http://www.youtube.com/watch?v=x6qe_kVaBpg
 - http://www.youtube.com/watch?v=Ea41RdQ1oFQ
 - http://words.steveklabnik.com/rails-has-two-default-stacks
 - http://www.youtube.com/watch?v=Lsg84NtJbmI
 - http://www.youtube.com/watch?v=fSAgFxjFSqY
Thanks for the fundamentals video & grunt intro, both were helpful to me.
I will say though (as a lighttable backer with the fancy-schmancy t-shirt to prove it) that the current builds of light table are... incomplete. The idea of lighttable is very exciting, but I haven't really found the current builds to be much use. They are too similar to existing editors / workflows, and as it's beta you're sacrificing stability for not a whole lot of gain.
I was working in the background on my own system before Light Table appeared. I'm back on it now in my spare time, since Light Table is aiming away from my requirements. Every time I see an article like this I think I should have worked harder to get it working by now.
Out of curiosity, what are your requirements?
It's hard for me to imagine that LT is really aiming away from anyone :) It's a platform for building whatever workflow you could want, integrated with any system/language/service you can find. That particular aspect of it hasn't been released quite yet, but it will be the major focus of the beta.
I'm researching browser development frameworks and languages to develop an XML editor. Editing should work more like Google Docs than CodeMirror, and with rudimentary form support (think key-value metadata, not complex forms).
So far I have settled on using contenteditable, but structural editing of XML in a document-oriented fashion is still an open question. One user action can result in many XML tree changes; for example, pressing Enter twice inside a paragraph should split it in two (and thus close and create any number of tags). Also, there is no schema to describe an XML document's editing workflow. Your post on the IDE as a value gave me insight into a possible implementation.
Also, big thank you for LT. I'm enjoying using it immensely.
My background is as an architect (buildings). For me, most of the issues I'm interested in solving are to do with interaction with visual information (2d and 3d) rather than working with reams of code.
I was most excited where I saw the Light table demo where there was dynamic editing of a game (inspired by the Brett Victor video?). The more recent developments seem to have largely focussed on making it have the full functionality of a traditional editor with some additions. What I am looking for is something that can take over not just from a code editor but also take over from other kinds of programs. Examples might be something which can replace my photoshop actions palette or something where I can organise my screen to compare Google Earth and Openstreetmap data side-by-side or on-top of each other (and click on things to query what they are). Generally these are conceptually simple tasks which current tools and workflows make very difficult to implement.
I have to run but I hope the above makes sense. I'd probably need much more text for a fuller explanation. Happy to discuss.
Also interesting discussion at:
I have been an Emacs user for decades, and I am amazed that Emacs is still superior to almost every other editor. I think the incredibly easy plugin system and the many powerful plugins (like org-mode) are the reason why Emacs is still alive today. But I am curious to see how LT develops.
The Chrome team has shown that they are very interested in moving the Inspector forward and have succeeded in integrating local file access via the editor, SASS support, Source Maps support, and a lot more. It is to the point where it would not be crazy to consider building a site entirely within the inspector. While the editor is not as good as others, you do gain simplicity and the ability to patch running code. The workflow is getting better.
Writing a great inspector is the hard part. Comparatively, writing an editor should not be as difficult. Chrome built the inspector first and is now circling back to the editor.
The only integration I would really care to see (perhaps via ST2 Package Control) is the ability to directly pipe into the Inspector and patch running code. For large projects, especially in development mode, it can be a drag to ajax in >200 source files (even from localhost) and refresh every time you make a change.
Running SPDY locally helps a lot. Even with all those files I hit DOMReady at about 2.7s with no concatenation.
Every time I've encountered requirejs in an application this has been representative of my experience with the tool, and the pain of having to wait that long every time I reload a page simply doesn't seem worth it to me. There's also somewhat of a mismatch between async loading assets in development and sync loading them in production, which I've seen cause bugs that show up in one environment but not the other.
A couple of questions for you: do you think the r.js optimizer taking 2.5s to compile is related to the complexity of the dependency tree in your application or just the number of files being loaded? Also, considering the previously mentioned mismatch between dev/prod and async loading, do you think it is appropriate to use something like SPDY to obviate the pain of a lengthy DOMReady event in development?
 - http://searls.testdouble.com/posts/2013-06-16-unrequired-lov...
Using r.js in development isn't the worst idea. It's worth seeing how long it takes in order to make that decision. Compiling tpls is much faster (grunt-contrib-jst) and adding that to your grunt watch & including it directly is a good way to save time. I think it takes a long time on my end due to the complexity of the dependencies. I only include exactly what each module needs so some dependencies may be as many as 6 levels deep, or more (haven't really checked).
SPDY makes a big difference for me (big enough to ignore the problem for now) and I don't mind using a self-signed cert in dev.
EDIT: I hadn't been compiling tpls using JST in dev until I wrote this post - a great side effect is, it actually shows me now where the errors in my tpls are! Previously any tpl's stack terminated at the code that ajaxed in the tpl. This is far better for debugging and brought my DOMReady time down to about 1.75s.
Instead of having >200 files that need to be compiled with r.js every time, what about compiling them to intermediary builds? Abstract your code into some bigger modules and wrap them up in a little bow with a nice interface, and basically consider that part of the code "solved" and focus only on what is currently changing or being built. Your build process should reflect this!
I've been doing this on larger projects with RequireJS and r.js and my build times are very short... it just builds from maybe 15-20 files, where a few of those files are the result of some other r.js build.
Now, should r.js possibly do things to better manage this sort of approach? You bet! There are many mature build environments that have a similar approach. Maybe r.js needs some sort of concept of "linking"?
Personally, I don't mind having to manage intermediary builds, because... that intermediary build is making something that I can use in OTHER projects... I haven't really come across a situation where pure business logic is dominating an application. Almost everything that a program does can be abstracted out and reused! I'm sure people out there can provide plenty of examples to the contrary, and I'd love to hear those!
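A sketch of what an r.js build config along these lines might look like; "core" and "app" are hypothetical module names, with "core" playing the role of the stable intermediary build:

```javascript
// build.js sketch for the r.js optimizer (module names are made up).
// "core" is the "solved" code, built once and reused; "app" excludes it,
// so day-to-day builds only recompile what is actually changing.
({
  baseUrl: 'src',
  dir: 'build',
  modules: [
    { name: 'core' },
    { name: 'app', exclude: ['core'] }
  ]
})
```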
grunt-contrib-watch is a pretty vital part of my workflow, but I don't use the livereload options because I can't afford to lose that state in the browser.
 - https://github.com/searls/extend.js
Is this really that desirable? I can imagine a lot of scenarios where this would cause unexpected behavior.
Most other platforms don't support this, do they?
Would it be possible for you to add the source code of the play application (server) to your github ?
Thank you for your plugin :)
'We have a lot of stateful frontend code that's hard to debug and test.'
'Maybe you should strive for less stateful frontend code?'
'Nonsense! We should completely reengineer our workflows and toolchains to accommodate whatever we're doing right now, because everything else is stupid and outdated.'
Perhaps only tangentially related, but our workflows are significantly shaped by the technologies we use, right? As Ian Hickson put it in an interview:
"The Web technology stack is a complete mess. The problem is: what would you replace it with?"
"The remote debugging protocols are incompatible with each other, and each has a different features."
With weinre you start a Node.js debug server and add one script tag to your HTML. Then you start the debug client in a WebKit-compatible browser and finally the browser with the page you are debugging (which can be anything: mobile, remote or not).
weinre's strongest points are:
- "weinre supports remote interaction, so you can run the debugger user interface on one machine and can debug a web page running on another machine. For instance, debug a web page displayed on your phone from your laptop."
- "Because weinre doesn't use native code, the debug target code will run on browsers without specialized debug support."
Also try Live Reload, mentioned in the article, for a nicer editor+your_tool+browser integration.
- "LiveReload monitors changes in the file system. As soon as you save a file, it is preprocessed as needed (SASS, LESS, Stylus, CoffeeScript and others), and the browser is refreshed." (You don't hit reload; it uses a browser extension or a script tag)
- "LiveReload can invoke a Terminal command after processing changes. Run a Rake/Cake task, a Shell script or anything else you need."
One of the best examples of an efficient workflow (at least in theory) that I have seen is the Play framework, along with a browser plugin that causes the browser to refresh every time source code is saved (http://www.jamesward.com/2013/05/15/auto-refresh-for-play-fr...).
People are trying to "fix" the web, when it should be replaced or redesigned. Thus the problem is human, not technical. Who should decide what replaces all the TLAs that the "modern" web requires? Too much time and money is invested in browsers, servers, languages, frameworks, training, tools, etc.
Not that it's all bad. A lot of technology that wouldn't otherwise have been developed, has been. It's changed a lot since I started building sites in '94. But I can still build nicer apps for the desktop, with better interfaces and better performance, using a single programming language, faster than I can develop a rough equivalent as a web application.
We plan on releasing our dev toolset open source soon, but we want to retool require and handlebars to be able to also be hot-swapped without reloading the browser.
I guess I am trying to say you are the master of your workflow, if it sucks, make it better.
It's a bunch of static text, for goodness sake.
The workflow is actually just like with any other programming environment. You make changes to the code with your editor and then you reload the page or restart the application. If you really need to, you can use a debugger to see what's going on.
Fancy integration between the editor and the debugger is nice, but it hardly breaks the workflow.
That being said, there is certainly room for improvement when problems happen.
Some part of "web development" could use improvements.
It isn't really a read-only environment! It can't read anything! All the browser can do is GET and POST things...
...so maybe the problem with web development is that it is all based on files?
I don't want to defend Firebug, but in my experience the Developer Tools are a pain to use in comparison to Firebug (e.g. autocompletion of CSS properties and property values only works with Tab and not also with Return, it takes too many clicks to see the metrics, ...).
What are the features that make the Developer Tools better in your experience?
Check this out: https://developers.google.com/chrome-developer-tools/docs/ti...
Here is another great example - http://remysharp.com/2012/12/21/my-workflow-never-having-to-...
My two cents: find solutions... People who make tools are just like us, some empathy, accept what you cannot change :)
To make changes to a small piece of code, you don't want to have to go through a huge iteration of running up your server, going to your browser, moving around the app to get into the required state and then trying out the bit of functionality. No, you just write a test for that bit of code and run it. No browser needed.
The cycle he describes is what you do at the very end. Once.
I'm so glad that Kenneth's post just proved my concept!
LIVEditor combines a Scintilla-powered code editor with a Chromium-powered browser; the integration is very deep since they're the same piece of software.
I debug CSS that way. No more memorization, and you don't even have to start another program.
Adobe's new Edge Code does this...
Of course, this may only make sense in 10-15 years when there's even better tools than today.
Regardless, there will always be a desire for it to be easier, and it will always be work.
The basic principles and functionality have remained essentially unchanged the entire time. We're still mainly setting breakpoints, stepping through code, inspecting variables, and so forth.
Those of us who have been in industry a long time have seen much greater gains from the use of strong, static typing and unit testing, for instance. The best way to use a debugger is to not use it at all, because many of the bugs have been prevented outright by the nature of the language used, or at worst caught immediately by the compiler or automated tests.
It must have been just over a year ago I last checked, and it looks like it's been available since last July!
On the other hand, if you use the right development methodologies such as unit testing, MVC, MVP, MVVM, and/or frameworks that translate statically typed or functional code to JS, the write-build-run-debug cycle is not an issue, because in general it only becomes an issue with a wrong approach to programming. This is totally the same as in non-web-programming.
And HTML trundled along unwanted in the WHATWG through those times, scorned, rejected as inadequate and unsuitable. And yet XHTML2 got shut down, and everyone switched back to the HTML path.
And attempts to change web development from an environment into an output format, such as Google's GWT and Dart, don't seem to have gained much traction.
I don't know what happened to Xanadu, either.
You probably should dig into the history of Rich Internet Applications (formerly XUL, before Mozilla decided to stamp out the naming confusion with their own XML vocabulary), in the days before Ajax really stabilised and things drifted back into the browser.
In an ideal world, now that the web has shifted from being a bunch of linked documents to complex applications, we'd have a development stack aimed at developing applications, with sane means of specifying UI layout and behaviour, low-overhead client-server communication protocols, etc.