Rock on, Microsoft — this is great news for Windows developers. As much as the open source community (of which I am a part) loves to rag on Microsoft, they seem to have recognized the threat of platforms moving off of Windows (Steam, iPads, Android, ...) and are taking reasonable steps to encourage development for Windows (make the developer experience better).
This — a reasonable response to a potential threat — is a huge step for Microsoft. Kudos, VS team.
You've obviously never suffered through a large project. Crashes, memory gobbling, random debugger failures, glacial speed, lag, project configuration corruption, UI glitches, refusing to load projects, half-complete refactoring operations, search/replace just stopping, having to reset the environment at least once a day, the grey screen of death.
And that's just the IDE. The CLR is the only framework-and-language combo I've used that will quite happily just stop working one day with no human intervention.
It's a bag of shit for me and I resent using it.
For reference, I've been building software on Windows since 1994, from Win32 to WPF to ASP.NET to MVC to WCF.
The only positive thing I can say is the money is good, but it's danger money.
Yes, agreed (having worked on middling-to-big solutions with maybe 40 VS projects each, 45+ minute incremental rebuild times on the latest-gen CPUs, etc., across several hundred million LoC), but believe me, if you think that's bad, you don't want to see Eclipse/NetBeans/Xcode/etc. with a project a thousandth of that size.
Slightly curious: what kind of projects were you working on that are "several hundred million LoC"? Wouldn't 45 minutes be reasonable for that many lines of code?
I ask because I've never even come close to touching a project with that many SLOC, and I was also under the impression that most modern operating systems barely fit into the 100 million+ LoC category, correct? This is your chance to redefine my perspective on "big project", haha.
Some C# developers use T4 to generate data access code, for instance. This can amount to several hundred thousand lines of code for a moderately sized database. If several large databases are being accessed, as is often the case in reality, then I could easily see there being millions of lines of automatically generated code in a single project.
I know nothing about nothing, but having that many projects under the same solution is the biggest clue that you are doing something very wrong.
Even if you have a reason to have 40 projects in what looks like it should be one solution, you can still create various solution files with just the subset of projects you need. No one works on 40 projects at the same time.
We have a solution with more than 80 C# projects, and visual studio 2010/2012 handles it fine on modest hardware. I usually hit shift+F6 to build just the current project when I'm iterating on some change. This builds very fast because our individual libraries are small. It doesn't really feel "wrong".
Unless you have a very good reason to deploy and distribute 80 different DLLs, it should feel wrong.
Many people think that in order to have a well-layered and decoupled application you need to break up every single piece into a separate project, and that makes no sense. That's what folders and namespaces are for.
For C#, maybe (I don't have enough experience there, but I would be surprised if what you say is good practice).
For C/C++: you do that with static libraries, not shared libraries, and that's the only sane way to work: have all library projects part of your main workspace, so you can easily debug and fix stuff in them, yet manage them independently.
For production/distribution you can always use a tool like ILMerge or SmartAssembly to merge dlls/exes together into something that makes sense for that particular distribution. I've had plenty of success with both tools.
That said, I do get what you're saying. Good practices can be taken to the extreme. Some approaches, such as Prism, take the idea of breaking the UI up into modules that can be registered with a shell, where on reflection that type of flexibility is rarely going to be needed.
Yes but that's not the case if you're changing a dependency in the middle of the chain, as is more commonly the case.
Additionally, as far as I can tell, the main thing that slows compilation down isn't so much the actual compilation step as loading in and copying all the project references. Visual Studio does this separately, from scratch, for every project that you rebuild, since each project compilation runs in a separate csc.exe process.
In that case it's probably sensible to break it up not only into separate projects but separate solutions too.
What bugs me is when relatively small solutions are broken up into large numbers of projects. Very often it's done for no reason whatsoever other than aesthetics. They're often divided up "against the grain" too, putting every layer of your application (presentation layer, business layer, repository, domain model, services, interfaces etc) into a separate project, with the result that a single task requires you to make changes to several different projects.
The general rule that should be followed here is the Common Closure Principle: classes that change together should be packaged together.
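A minimal sketch of what that principle looks like in practice (the Shop namespaces and types below are hypothetical, purely illustrative): everything that changes together, such as the whole Orders feature, lives in one project under feature namespaces, rather than being smeared one-class-per-layer across many projects.

```csharp
// Hypothetical single-project layout illustrating the Common Closure
// Principle: classes that change together are packaged together, so a
// change to the Orders feature touches one place, not five projects.
namespace Shop.Orders
{
    public class Order
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
    }

    public interface IOrderRepository { Order Find(int id); }

    public class OrderService
    {
        // Business rules for orders live beside the model they operate on.
        public decimal ApplyDiscount(Order o, decimal rate)
        {
            return o.Total * (1 - rate);
        }
    }
}

namespace Shop.Customers
{
    // A separate feature gets its own namespace (or folder), not its own
    // presentation/business/data trio of projects.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }
}
```

A task like "add a discount rule" now edits one namespace in one project, instead of requiring checkouts across presentation, business, and repository assemblies.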
We don't have hundreds of millions of LoC, but we have a couple of million. We solve the problem you are describing with svn:externals, pulling in the DLL/PDB from other projects/solutions as needed on svn up.
I agree it's not as comfy as having everything in one big solution, but being able to count your compile time in seconds as opposed to minutes makes it worthwhile.
We used to use several different solutions with subsets of the projects; now, though, I use the solution load manager extension, http://visualstudiogallery.msdn.microsoft.com/66350dbe-ed01-... which brings down load times significantly. It basically lets you specify how projects should be loaded, so most can be loaded on demand. We have ~150 projects in total, ~10 million LOC, and VS handles it almost as well as a small solution.
This is one of those anecdotes that just pollutes discussions, allowing people to cheer on their biases. My anecdote is that Visual Studio deals with very, very large projects with gusto (albeit far below the absurd "hundreds of millions of lines of code" scenario described by someone else, which if in one solution borders on insane). The CLR has worked ridiculously well for years on end with nary a hiccup in sight. And so on.
Visual Studio isn't perfect, nor is any IDE (Eclipse...xcode...Geez, turn on the coffee machine because we'll be spilling complaints all night long).
VS is the only large-scale IDE I know of that refuses to add a search bar for options. Eclipse and IntelliJ IDEA both have it. Trying to find what you want in the myriad of options VS has is tedious. Changing syntax colorings without something like ReSharper is taxing, too.
Thanks, I would have never guessed it was that sort of search in 2012. I would have figured the more intuitive option was to put it within the actual options area. Then again, I guess I should have known better with the way search is in the rest of the Win 8 UI (though I don't use Win 8 enough to instantly think of that, lol).
But if you talk language, Java has never held a candle to C#.
The original C# language and virtual machine were inspired by Java.
Then, C# got:
* Better generics
* Lambda expressions
* The yield keyword and compiler magic for iterators
* Explicit interface implementation (rarely needed, but very well thought-out for when you need it)
* Type inference
* The dynamic keyword
(I'm forgetting some things, it's late).
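To make that list concrete, here is a small self-contained sketch (all names are mine, purely illustrative, not from the thread) exercising a few of those features: the yield keyword, lambdas with type inference, and explicit interface implementation.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Demo
{
    // The yield keyword: the compiler generates the whole iterator
    // state machine for us, so this infinite sequence is lazy.
    static IEnumerable<int> Fibonacci()
    {
        int a = 0, b = 1;
        while (true)
        {
            yield return a;
            int next = a + b;
            a = b;
            b = next;
        }
    }

    interface IQuietly { void Run(); }

    // Explicit interface implementation: Run is callable only through
    // IQuietly, so it doesn't clutter the class's public surface.
    class Worker : IQuietly
    {
        void IQuietly.Run() { Console.WriteLine("running"); }
    }

    static void Main()
    {
        // Type inference (var) plus a lambda over the lazy iterator:
        var evens = Fibonacci().Take(10).Where(n => n % 2 == 0).ToList();
        Console.WriteLine(string.Join(", ", evens)); // 0, 2, 8, 34

        IQuietly w = new Worker();
        w.Run(); // visible only via the interface reference
    }
}
```

Because Fibonacci is lazy, Take(10) pulls exactly ten values from an infinite sequence; the Where lambda then filters them without any intermediate collections.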
Anders has guided the C# language brilliantly ... Java has been outpaced at every step of the way. So many somewhat radical features have been added to C#, and from my standpoint, every single one of them was very well-done.
Language alone does not mean tons of productivity gain. Building software requires more than just syntactic sugar.
C# has all these features, yet I have never seen killer tools that changed the .NET landscape like Rails did for Ruby. While I would not call those language features smoke and mirrors, I argue that they are merely nice-to-haves, neither groundbreaking nor thought-provoking.
At the end of the day, the Java ecosystem still moves forward way faster than .NET. Spring, Cloudera, DataStax, Alfresco, Liferay, Ehcache, JBoss, Tomcat, embedded servers like Jetty, and other great, free, mature, serious, open source tools are at our disposal, versus talking to sales reps to buy licenses, which is a common activity in the .NET world.
I could pick those off one by one, but time does not permit. You are largely correct that there seems to be more experimentation in the Java open source ecosystem, and that some of the key parts of the .NET equivalent are ports from Java (e.g. NUnit, log4net, NHibernate). Though in all those cases there are alternatives; it's just that the particular tool mentioned is the most popular, possibly due to familiarity.
With things like Cassandra, I don't care what language the server is written in as long as I can connect to it. This is, IMHO, the way forward, and not just for .NET. Though if you're looking for a NoSQL DB written in .NET, there is RavenDB.
Sir, Maven is an awesome thing. Would you want me to do a Google query for each and every .NET tool plus the word "sucks"? Since when does that matter?
There will always be a minority that just happens to dislike everything. There are also people who just happened to use Maven the wrong way and ended up fighting with it.
This suggests that you have never used and experienced Maven. Combine Eclipse with m2eclipse and you will get an awesome development experience. Imagine not having to download third-party libraries manually by visiting their websites. Imagine autocomplete of a freshly acquired third-party lib via your IDE, which also shows you the Javadoc. Imagine navigating to a third-party class and method implementation automagically, without setting up your IDE or messing with path/folder setup. You cannot do any of this in .NET or VS.NET.
I know RavenDB, but can you compare it with Cassandra or HBase? Not by many, many miles. The latter two are battle-tested by the highest-traffic websites, while the former has yet to reach that level.
I also argued that the Java EE 6 stack provides a way better, simpler, and more modular approach to building back-end systems. There is no equivalent of EJB 3 in .NET (and I'll be damned if you do another Google query for "EJB sucks"; the old one was bad, but not the new one. Also, experiencing the tools before making your judgement wouldn't hurt). All in all, the .NET framework has for the most part always been behind Java (except in the category of presentation/UI).
I prefer not to continue the discussion when the obvious is right there in front of us: C# has cool language features, but honestly nothing in the .NET world has been groundbreaking. The last one was probably ASP.NET MVC and the changes to core ASP.NET as some sort of API instead of the old ASP.NET WebForms stack, mimicking the JEE web profile approach.
Rake and the rest can be considered sub-features of Maven. Not. Even. Close.
Honestly, if it was that far beyond everything else, people in the .Net community would be talking about it a lot. And they're not.
> Imagine not having to download third-party libraries manually by visiting their websites. Imagine autocomplete of a freshly acquired third-party lib via your IDE, which also shows you the Javadoc. Imagine navigating to a third-party class and method implementation automagically, without setting up your IDE or messing with path/folder setup. You cannot do any of this in .NET or VS.NET.
Factually incorrect. The equivalent happens in VS via NuGet right now.
The fact that many people in that HN thread said they had no problem with Maven? Again, sir, what you were doing was just linking gossip, as opposed to using it, trying it out, experiencing it, understanding its uses and features.
I can link to many .NET blog posts about how people have left it because it is too limiting, but that is not the point. Personal preference does not equal real-world evidence that .NET is not limiting.
Last but not least, your NuGet can't:
Run unit tests without any setup.
Run integration tests without any setup.
Run code analysis as part of the build.
Run code style checks as part of the build.
Package your project and make it ready as a dependency for your other projects easily, without having to import the whole source folder as another project in a solution.
Deploy to your app server.
Generate Javadoc or .NET docs.
I recall there were challenges in using Maven for .NET projects. Would you like me to query people's praise of Maven? Or how the .NET community wishes for, and is looking for, Maven-equivalent tools in .NET, and how NuGet is just a subset of what Maven can do? Or would you like to look at the current landscape of build and dependency tools and how almost all of them mimic what Maven does?
The equivalent of Maven would be MSBuild, MSDeploy, NuGet, and various other tools, which you have to set up manually and which require a lot of effort. In the Rails world they required Gems, Rake, and Bundler to match Maven's capabilities.
I think I've explained too much. There is absolutely no point in continuing the discussion if all you do is perform Google searches for Java bashing, because the same can be done with .NET, and that would be a waste of time.
Otherwise, let me know when there is a huge revolution in the .NET world that shakes the software development world, because so far you guys have just been following Java's footsteps in almost every area except C# language syntax.
C# - a language that can do (almost) everything: mobile, web, desktop, etc. (Java still beats you guys on embedded devices).
VS.NET - an IDE that can build, run your tests, do UML modelling and much more...
Final thoughts on Maven: what I care about is a tool that performs the build for me, and in 2013 validations are part of the build: validate that your code compiles (compiler), validate that your code can be packaged according to the agreed standard (DLL, JAR, whatever), and some level of behaviour validation (unit tests, integration tests).
If you disagree, then perhaps we have had philosophical differences about good software engineering practices from the beginning.
Eventually you either build something from scratch to mimic Maven in the .NET ecosystem, or use various tools (MSBuild, NAnt, NuGet) that perform the same workflow Maven gives you. Either way, you have nothing like Maven in the .NET ecosystem, which is a huge loss for me, since why would I learn various tools, or build some pieces of the puzzle on my own, when I could have _the_ tool that does what we all have to do on a day-to-day basis anyway...
You're not saying anything that comes remotely close to showing how the .NET ecosystem is richer than Java's. Perhaps because it isn't.
PS: Maven is composed of plugins; the fact that some plugins can do unit testing while others do static code analysis is just... awesome.
Java just doesn't scale; if you want fast code you won't use Java. C++ is one of the best, and for quick development, C#; you can always invoke C++ libs from C# anyway, and both are supported in Visual Studio. If you're into web development, even there Java is slow... despite having some fun libs. But most people use them because they're lazy programmers; a good programmer won't rely that much on external libs.
Twitter uses Java. It seems to survive major events fine these days. I'd say it scales reasonably. You also have to take into account huge enterprise deployments.
As for 'fast' it really depends what you mean. Nobody's going to dispute that running a compiled application written in C is going to beat the pants off anything running on top of a VM, but is that speed factor important all of the time? Of course it's not. Most of the time a short wait is perfectly tolerable in exchange for the assistance in writing correct code that languages like Java can provide.
I attempted to write a simple web project in Java to learn it. I haven't used Java before, I normally use Python for web stuff and C# for other misc stuff.
It was hell. I installed JetBrains IDEA quickly enough, but it went downhill from there. My antivirus (Kaspersky) fucked with Java 7's networking, meaning it couldn't connect to anything, so it took an hour and a lot of googling to fix that. Next step: make a Struts 2 project. Wait, Maven doesn't like the archetype IDEA gives it, so it explodes and doesn't install it. OK, do that by hand. Next, install a web server; which one to choose? Install one, it doesn't work with IDEA, so I have to install an earlier version.
OK. Finally got a blank project up. Read the docs for Struts 2, brain hurts, uninstall everything, fire up VS 2012, write the whole thing in C#, and run it on Mono. Easy.
I'm a TortoiseHg (mercurial GUI front-end) developer and an (occasional) mercurial contributor.
I think this is really great news, both for git, Microsoft and OSS in general. It is definitely a great move for Microsoft.
I hope they also add support for Mercurial in the future. Git is a great tool but I think Mercurial is equally powerful yet easier to use and understand (IMHO). It is not as widely used as git, particularly in OSS circles, but there are many OSS projects (e.g. Python) and many companies (e.g. Mozilla and Facebook) that use Mercurial very successfully. Choosing Git as the first DVCS they support makes a lot of sense, but Mercurial would be a nice second choice.
In particular, being able to use mercurial with TFS would be awesome in an enterprise context. Plus I'm sure all in the TortoiseHg project would welcome the competition if Visual Studio were to get builtin support for Mercurial as well as git :-)
While there are a lot of compelling reasons to use a distributed version control tool, there are also compelling reasons to use a centralized system. Having a lot of giant files (game resources are the canonical example) suits a checkout/edit/checkin system much better than a system that scans the disk, like an edit/merge/commit system or a DVCS does. These sorts of repositories exist within Microsoft (and DevDiv) itself. For that reason alone, TFVC won't be going anywhere anytime soon.
A couple of other reasons to use centralized version control:
1: You are a Fortune 100 company and need to have access control on your codebase (so you can give contractors access to only a few files for example)
2: The code you are writing is subject to regulatory control.
Interesting you should mention that. We have our code on a corporate SVN server but I used git (via git svn) to create a mirror for some offshore developers outside the export control bubble. Git allowed me to exclude a set of files very easily. Not an argument pro or con either system, just saying.
Sorry, help me understand here. How does something whose only method of control is tied to the IDE you use keep any control over the source code?
You can still use git-tfs to use TFS like you would use SVN. You can also just copy the file to another folder and suddenly everything is good. Or you use Time Machine or any other backup mechanism.
Am I missing something magic that TFS does that I don't understand?
I can't speak for TFS, so maybe they do something daft, but it's usual that when you get latest from the server you simply won't get the files you aren't allowed to. You won't even see them. It's access control at a finer grain than the repository level, applied on the server side.
(Since git always gives you the whole repository, there's not much you could do with git, but many systems don't work that way.)
I've never used TFS but my understanding is that TFS server allows you to configure it to only give a specific person access to a specific set of files.
For example: One could tell the TFS server "Deny Contractor-X access to all files, except for files A, B, and C". I assume that this would only allow Contractor-X to access files A, B, and C. Even if they were using git-tfs.
> If I remember correctly, certain regulatory environments require that an audit log be kept of who saw which file and when.
Okay, let's say that's your environment. How do you prevent Mole Manny (who is a legit developer in your organization) from pulling down all the files in his project (as he's entitled to do), and then copying it over to his $super_sneeky_storage_system?
We've talked a lot about this where I am and we've concluded that to ensure a super-tight environment we'd have to do a slew of really heinous lockdowns (epoxy usb ports for instance) which would likely not really do much against moles but slow down and anger legit users.
Really? The only places I've run into regulatory requirements for audit logs of seeing data is when the data itself is sensitive government data or is legally protected personal data; outside of national security, those aren't the kind of things that generally apply to code (and in the national security space, I'd imagine that you'd need more comprehensive monitoring of your systems, desktop or server, such that using centralized VCS for the purpose would be redundant [alone, centralized VCS for that purpose has holes big enough to drive a truck through, since it can't monitor who sees information once it is checked out, only who checked it out].)
If you write medical device or flight-control software, you have to have traceability all the way from requirements to the delivered binaries. With a central server & build system, that's much easier to do & also explain to federal inspectors.
I can speak from my own experience there are times where depending on funding, portions of a project must be developed by U.S. citizens... but the bigger project can have parts developed over seas. Depending on funding and the project.
Centralized version control doesn't have any advantage over distributed version control for either of those scenarios. Particularly, it doesn't give you any more control over where the code goes after someone with authorized read access to the repository makes their own copy of it.
For that, you need comprehensive monitoring on every system from which the repository can be accessed that tracks what is done, and once you have that, it doesn't really matter what you do for VCS for that kind of monitoring.
One thing that I've been impressed with is that changes you make from the command line are reflected instantly in the GUI. I like to change branches from PowerShell with posh-git. I'm using the GUI and command line interchangeably.
(Disclaimer: I work for MSFT but not in the git/vs group)
This is one feature that Eclipse doesn't have, since you have to refresh every time there are changes outside the IDE.
Every time someone says VS blows Eclipse away, I wonder what VS has that Eclipse doesn't. When I dabbled in C# a few years ago, I found the textual support for refactoring and such much lacking in VS, IMO. I haven't tried ReSharper, though.
My brother applauds VS for its WYSIWYG, but you rarely do that when you program in Java.
ReSharper adds so much value to VS that it's hard for me to work without it. I've been using ReSharper for about a year, and it's simply amazing. My productivity has gone through the roof (I've been using VS since 2003). Alt+Enter gives you the magic.
Currently working in VS and Eclipse at my current gig. One thing that I definitely miss when working in Eclipse is the ability to move the instruction pointer around at my own discretion. Though, admittedly, I'm unsure if this is a limitation of the languages more so than the IDE. /digression
Enable global-auto-revert-mode in the "custom-set-variables" section of your .emacs, and it will automatically reload any file open in Emacs that changes on disk (excepting files with unsaved changes). While it's a little slow doing the actual reloads under Cygwin, it's blindingly fast on Linux. Works a charm with egg; haven't tried magit.
This seems alright, but it definitely feels incomplete. I tried it out with one of my github projects, and the setup wasn't impressive at all.
First, it didn't automatically detect that there was a 'GitHub' remote. My first guess was that I needed to call it 'origin', but that didn't fix it. Instead, I needed to go into the command line and specify the master branch's upstream branch like so: "git branch --set-upstream master GitHub/master".
Second, as soon as I tried to fetch I got "An error was raised by libgit2. Category = Net (Error). This transport isn't implemented. Sorry". Turns out I have to use the 'http' link instead of the 'ssh' link as the remote destination.
Both of these errors could have been avoided automatically, or at least produced better help. Branch has no upstream? Assuming it's the only remote branch with the same name is a pretty good heuristic, especially when nothing happens without user action. Don't support SSH? Try the obvious HTTP alternative, or tell/ask the user to try it.
It's important to note that this is a "community technology preview", so while we've put a lot of work into this, you're right, it's nowhere near complete. The underlying technology here is libgit2, which currently doesn't support ssh, although it's something that is being worked on. (We'll improve the error message, of course, for the future. We appreciate the feedback.)
I didn't really intend to sound like I was making damning criticism. After setup the process seems fine, and solving those issues only took ten minutes (but ideally they wouldn't occur in the first place).
It wasn't taken as such - and we appreciate the criticism, as we've got a fair ways yet to go. I was just wanting to set expectations appropriately since there's been a lot of excitement around here today and even we have forgotten that this is oh so very rough around the edges.
Libgit2 contributor here. The SSH transport is actually in progress, so this specific issue will go away fairly soon. I'll agree that the error message could be better, but my gut feeling is that most people will clone from within VS and use the HTTPS transport anyway.
This is very much a pre-version-1.0 UI, but I like the direction they're taking it. Really in touch with what their users want and need.
This is (a) absolutely shocking, and (b) utterly fantastic!
If MS had tried to make a DVCS to compete with git, they would have always been third fiddle (to git and Mercurial, and possibly others). But they could still have made money selling it to all-Microsoft shops.
Instead, they acknowledged the situation and incorporated git support!
This is so right, so beneficial to their customers, and yet so completely opposite to what I expected them to do!
I must give credit where credit is due ... fantastic decision, Microsoft!
MS seems to be much more sensible in the Developer Tools division, i.e. .NET, VS, etc. There are several quite vocal open source supporters there and they seem to have enough influence that this part of the company does some quite good things. The Windows division ... not so much.
I am surprised that they are moving towards git and not Mercurial. Isn't Microsoft a sponsor of Mercurial? Perhaps this is because hg already has very good tools for Windows. I suppose I am just a little confused as to why git gets more attention than hg.
"When we made the decision that we were going to take the DVCS plunge, we looked at many options. Should we build something? Buy something? Adopt OSS? We looked at Git, Mercurial and others. It didn’t take long to realize that Git was quickly taking over the DVCS space and, in fact, is virtually synonymous with DVCS."
For me, this is primarily a reflection of the quality of service and community on GitHub rather than any quality intrinsic to git. I prefer Mercurial to git, but find myself using git significantly more, as virtually every dependency in the apps I build lives on GitHub.
I do think however that there are lots of teams, particularly in enterprise, that are quietly and happily using mercurial.
I'd bet this is simply a case of the right hand not knowing what the left hand is doing. I had no idea Microsoft was sponsoring Mercurial; even that seems odd, spending money on an open source competitor to a product they also make.
From a bigger-picture strategic sense, the Developer Division at Microsoft is concerned about making Microsoft a good platform for developers. Fundamentally, if developers want to use Mercurial on Windows or Git on Windows or TFS on Windows, we're happy. And - increasingly - we'll donate money or even developer time to help make this a good experience.
I've been an hg fan for a while and like it better (hg jibes with my brain better, and git occasionally throws weird problems at me), but quality-wise I can't say there is much of a difference in my experience.
That said, I have never met a single other person who uses hg. Not at work or hackathons. Most have never even seen an hg repository and some haven't even heard of it. Git definitely has "won" this "war".
The problem is that large swathes of the Git community have been treating the whole DVCS scene almost in Hunger Games terms -- there can be only one winner, and all the others must die.
For what it's worth, I've heard quite a lot of anecdotal evidence that Git is pretty contentious among many teams that adopt it. Git adoption is often driven by an aggressive few, against the wishes of their colleagues who can be quite unhappy about it. Case in point: Git has more "hates" on amplicate.com than Subversion and TFS put together -- and "hates" outnumber "loves" by something in the region of four to one. (http://amplicate.com/hate/git)
(For reference, Git and TFS have roughly similar market share in the enterprise at the moment, and Subversion is about twice as widely used as either of them. Source: itjobswatch.co.uk)
I use hg for personal projects, but I agree that git has become almost a standard and, as others mentioned, almost synonymous with distributed version control systems. My feeling is that the differences between git and hg are smaller than the cost of switching from one to the other and reconditioning yourself to a slightly different work mode. (I guess I should have said "reconditioning myself".)
Microsoft initially made Codeplex work with Mercurial. I don't know why Mercurial was chosen over Git at that time, but perhaps it was because of Git's reputation for working poorly under Windows. I suspect that the sponsorship dates to those days.
Well, more than anything this underlines the incredible success of Git.
Initially I didn't think that it would take off: an SCM created by kernel developers for kernel development needs (not that this has to limit its uses in other areas), with little regard for hand-holding.
Joke is on me, it's a runaway success and even Microsoft acknowledges this.
Until the Day Now Known as Before Git TFS (aka Yesterday), this was my workflow for checking in code (I work on a Mac):
* get far enough along in my code that I want to check in, launch VMWare Fusion, start Windows, login to Windows and run security updates, connect card reader, login to the VPN, oops bad password, login to the VPN again, more security updates, launch Visual Studio, connect to TFS, launch project solution file, check out my project, find the local directory on Windows where the files are stored, copy files from Mac to Windows, check in the project. Cry a bit.
And THEN: a couple hours after Git integration was announced, we moved a project I was working on over to Git on Team Foundation Service.
My new workflow:
* Make a change in the code, commit, pull, push. From INSIDE Emacs on my Mac. If you didn't know any better, you'd think I was just pushing code to GitHub.
> You can perform version control operations by using the Team Foundation
> Server plug-in for Eclipse. You can also use the Cross-platform Command-Line
> Client for Team Foundation Server to perform those tasks.
I'm not sure how Git support gets your code back into your corporate TFS server?
Edit again: did not see you using their service. It will be interesting to see how well they implemented all the ACL-type stuff, also just in general I wonder about the transport security since SSH is not supported at this time. I'd recommend against using this Visual Studio Git support to push over the internet for now!
Ironic. I doubt Visual Studio could even compile the Git source code, given its twenty-years-obsolete C compiler, which is two standards behind (C99, C11). I doubt the Git developers feel obliged to hamstring themselves to support stupid compilers.
This is a great step. Now, will they stop alienating developers by omitting crucial tools like PIX and ATL/MFC headers from the free versions of their development tools? Or are they still under the impression that their platform is powerful enough that developers should pay them hundreds of dollars for the privilege of writing apps for it?
There's no way to change your wizard selections after project generation. You have to dig into the project settings dialogs and change the options manually. Or you could generate a new project with the right settings and merge your code into the new project. I'd say it's worth learning how to manually change the project settings, though, if you're going to be working with Visual Studio for any length of time.
The amount of code the wizard generates is actually remarkably small - most of the interesting stuff is happening in the base classes. You could probably create a new wizard project and merge in what you need, depending upon the options needed of course.
I know that Express isn't "aimed" at supporting legacy code. What I'm saying is, that's a bad product decision by Microsoft at a crucial time when they can't afford to alienate any developer who's still interested in developing for their platforms.
I'm not saying that Microsoft should actively support new development in MFC/ATL. All I'm saying is that they shouldn't delete ATL/MFC from the Express SKU.
It's true that it's still possible to get ATL/MFC in the free Platform SDKs for (much) older versions of Windows, but finding and installing it and integrating it with Visual Studio is a royal pain, and Microsoft won't tell you how to do it (or even that it's possible). And there's no way to get PIX for Direct3D 11.1 without paying $400+.
I'm sure that's their argument too. "Express isn't 'for' that." However, Microsoft is not really in a position now to dictate what developers "should" be using Windows/Visual Studio for. If they want Windows to remain relevant as a platform then putting roadblocks in front of developers is the wrong thing to do, even if the roadblocks are well-intentioned "steering" towards a development path Microsoft prefers.
But most people using the Express versions probably aren't building these types of applications. Also, the download size is important here. You can download the developer libraries GP refers to separately, IIRC (the Windows SDK, etc.).
The free Windows SDK doesn't include ATL/MFC either, for several versions now. You have to comb through the Microsoft Download Center archives to find an old download that happens to still have it. (For anyone who's actually looking, you can get it from, of all things, the Windows Driver Kit, version 7.1.0 and below only. http://www.microsoft.com/en-us/download/details.aspx?id=1180... )
Download size is not an issue because Visual Studio Express is primarily installed through a <1MB installer stub these days. You can choose the options you want and it only downloads what you select.
However, Microsoft is not really in a position now to dictate what developers "should" be using Windows/Visual Studio for.
Microsoft's developer tools have been a fairly lucrative part of their business. They are entirely within their rights, and have a robust, well-justified business case, in segmenting the way they do. I would love to have everything for free as well, but the real world doesn't always work that way.
Intel sells their compiler suite for $1000 or so, it's worth noting. All so you can buy their chips.
Do you really find cmd.exe to be as good as OSX Terminal, or iTerm2, or the default Terminal on Ubuntu, or ??
I've used PS on Windows 7, not on Windows 8. So I'm sure it received an upgrade. I did like being able to essentially pipe objects from one script to another. But none of these apps are as flexible and powerful as the OSX and Ubuntu examples I mentioned earlier.
I'm really not one for internet debates, if you feel this protective of an under-featured and clunky terminal app then ok, good for you. You can have the last word. But the GP that you replied to originally was correct and your random half-answers are really missing the point.
Maybe conemu is what you want? Not made by MS though. I spend far less time copying and pasting into terminals in Windows, so the relative oddities involved aren't that bothersome.
I think cmd/powershell is different, and lots of people write it off without even considering it. Parent wanted something "developer friendly". That could mean anything to anybody. I can only give half-answers to a half-specified problem. :)
Sorry if I wasn't very helpful. Far too often, it turns out "real development tools" means "exactly like on Linux", and it's a waste of time to explain how to achieve similar features.
Install Git Bash and/or msys. It's really not that bad. I fired my mouth off about PowerShell before learning more about it. PowerShell in Windows 8 does come with powerful remote connection abilities, and if you grok what PS is, it's better in many ways than its Unix (bash, zsh) brethren.
I've been using Git from command line with my visual studio projects already. Integration with TFS is a welcome change. Although I wonder how much effort Microsoft has to go through to change TFS's underlying SVN architecture to support Git. I will be the first one to try this and report.
TFS doesn't have an underlying SVN architecture - Team Foundation Version Control (the centralized version control tooling that was the only version control service available through TFS 2012) is unrelated to SVN. There are no changes here to support git - git repositories are first-class citizens, hosted separately from the existing centralized version control services.
But 'Bash for decades'? Is that true? A quick search for bash 1.0 shows me tarballs from '95, which isn't decades (plural) old yet. And that's ignoring the question of whether bash is the optimum shell, and the fact that some systems migrated to replacements (think dash, or all the zsh lovers).
So that last line of yours seems a little over the top.
Apparently libgit2, which they're using for this has the license:
libgit2 is under GPL2 with linking exemption. This means you can link to the library with any program, commercial, open source or other. However, you cannot modify libgit2 and distribute it without supplying the source.
That sounds nice to me, but I'm not sure how that differs from the standard LGPLv2.
LGPL requires that any distribution be able to replace the LGPL component with a modified version. If you use it as a shared library (er, DLL in this case I guess) you get that for free. But if, for example, you want to link a LGPL library statically you need to provide a static library containing the rest of your program in a linkable form.
The point is to allow the user the ability to exercise their right to modify the LGPL library and use it. The license you describe would presumably allow Microsoft to ship a binary with a fixed and unchangeable libgit2 implementation.
For what it's worth, our analysis is that the ability to replace the libgit2 DLL is a requirement. (Not being a lawyer, however, I don't really remember the rationale here.) So, of course, we ship the source we used to build the DLL and you're welcome to replace it.
The sooner TFS goes away, the better the world will be.
(No, seriously. There is one thing that VCS should never do: lose your changes. Ever. TFS does this in some specific circumstances. Yet, despite this, and the other problems with it, people continue to champion it and use it because of the tight VS integration. The sooner this stops happening, the better)
At work we still use TFS2008 and there doesn't seem to be any way to migrate the source with history except for upgrading the whole TFS installation to a newer TFS. So when TFS2012 is up and running on another machine we are going to move a snapshot.
Contrast this to the freedom and "decentralizedness" of using git.
MS uses a custom internal-only fork of Perforce. They've used it for quite some time now. It was considered a "competitive advantage" so they used that while Visual Source Safe (lol) and TFS were used by the outside world.
I know, and it shows. This is exactly what the VS/TFS/Azure team must stop doing.
Edit: I can't reply to your comment for some reason.
If they do use their own tools, how come it's so easy to get insane merging problems on so many of the xml files in use by a VS project? Especially the .dbml files, but also the project files. :-(
As I have said (here) before, MS is a massive company, so anyone making a statement like "MS does X" is almost assuredly wrong.
Devdiv, to my knowledge, as a whole (VS, CLR, etc.) uses TFS. The fork of Perforce the original commenter is talking about is probably Source Depot. When I started at Microsoft we (VS) were still using Source Depot. As I understand it, the primary reason was that we had been using it forever (okay, well back to the SLIME days) and there was a massive amount of history in there. Porting it all immediately to another source control provider was a large task. I 'fondly' remember the transition during development of VS 2010. Though to be fair, the TFS team was great about finding/fixing issues exposed by suddenly onboarding the entire VS team and our possibly 'interesting' source control requirements. I believe Windows still uses Source Depot, likely for similar reasons (the amount of history they have in SD makes the devdiv history seem like a tiny blip).
As for merging problems, I don't know. We routinely merge huge branches with hundreds of thousands of files, including many project files, other XML config files, etc., and I don't recall many 'insane merging problems' (just your garden variety merging problems that occur with that many files). Then again, I am not intimately involved in the merges (other than for occasional fire-drills on files I may have modified). I suppose it also depends on what the changes were on both sides of the merge.
Both the Visual Studio and TFS teams use, obviously, TFS for version control. Parts of the TFS team are using git and some of the team is using git-tf, but mostly it's straight up TFVC. But the VS and TFS teams do not use source depot (the aforementioned internal-only perforce-like tool.)
Edit: as for merging XML files -- TFVC uses a standard automerge algorithm, very similar to the one git uses. I'm not sure why you're having merge problems with your XML files, but I'm also not sure it's TFS's fault. But it would be interesting if you filed a connect bug for us to take a look at!
.dbml files are a pain to merge because on any change, that element is moved to the bottom. To avoid other people smashing my changes I manually move it back to where it was before checking in, so the diff is only one or two lines. No one else on the team does this though, so they still run into problems.
I tried to find where the 'Startups for the Rest of Us' AuditShark guy (Mike?) talks about writing a tool so that his outsourced devs wouldn't have to deal with it, but it would take me too long to find. He doesn't seem like the type to believe in open sourcing anything but I wish he would consider it.
The developer division has used TFS for over four years now. You can read about the scale and topology at http://blogs.msdn.com/b/buckh/archive/2012/06/08/developer-d.... Ryan was being nice when talking about the transition. We put the division through hell for a while (fall of 2008 until spring 2009), but the result was an amazingly scalable centralized version control system. Now with our full, native support for git in VS, on the server (just on the service http://tfs.visualstudio.com for right now), and joining the community to help build libgit2 (check out the committers page), TFS also supports the best DVCS.
I've been using git on Windows for years, and while the need for msys stuff is a little clunky, I've never had any problems using it. In fact, the built-in gui tools (git-gui and gitk) work WAY better on Windows than they do on OSX.
libgit2 will eventually make it possible to make a nice GUI client without shelling out to git itself. GitHub for Mac and GitHub for Windows both use it now, and it's way better than parsing shell output.
I've been using git on windows as well and it works for sure but it's obviously not a native windows tool. It doesn't integrate with any windows stuff (credential management, indexing service, scripting objects, etc.) it brings all of its own stuff from the unix world (ssh, grep, bash, etc) and depending how you set it up all of those things run in a weird emulation sandbox (where's your .ssh/config? your .bashrc, .gitconfig?). If you use egit in eclipse and msysgit you probably have had to copy your ssh keys and host settings in two places. When I use it from the command line I'm switching between windows and unix style paths (msys and cygwin both have a path wrapper utility but I'm using git on SUA for best performance).
I think a proper windows git would entail a proper windows ssh where your keys and host settings are managed in one place in the control panel or something (or at least %USERPROFILE%\.ssh\config) and would have a seamless integration with explorer and the filesystem so that other tools can leverage it transparently similar to how ssh is used transparently by so many unix tools.
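For comparison, here is the kind of single-place configuration Unix tools get for free from ~/.ssh/config (host name, server, and key path below are invented):

```
Host worktfs
    HostName tfs.example.com
    User git
    IdentityFile ~/.ssh/id_rsa
```

Any tool that shells out to ssh (git, rsync, scp) picks this up automatically, which is exactly the transparency the Windows side currently lacks.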
Git on Windows works fine for me, no cygwin in sight. It wouldn't have occurred to me to call msys 'part of a Unix interop solution', though I suppose it does give me a working version of Perl, which is a feature since it was the occasion for me to start using ack instead of grep.
It's not as bad as it sounds, and you don't have to do it for all scripts.
1) Most scripts will work, you just have to tweak how loops are written and the way you declare functions.
2) If the script uses basic commands like ls, mkdir, etc., there are built-in PowerShell aliases that already point to the equivalent PS command.
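A small sketch of both points (the function name here is made up; the aliases shown are built into Windows PowerShell):

```powershell
# 2) Built-in aliases map common Unix names to cmdlets:
#    ls -> Get-ChildItem, cat -> Get-Content, rm -> Remove-Item
Get-Alias ls

# 1) Function declarations and loops are where scripts need tweaking:
function Get-TextFileNames($path) {
    foreach ($file in Get-ChildItem $path -Filter *.txt) {
        $file.Name
    }
}
```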
They have a long list of what's wrong with git.
They think git is too hard for their average customer.
They have a list of features, some will be easier when storing some metadata in the git repo.
There are certainly ways to build a compatible and good product.
It is possible and not incredibly hard.
But it is Microsoft we are talking about.
What is their track record in playing nicely with the community and supporting open standards? IE? OOXML?
They have deadlines, the technical burden of millions of LOC, backwards compatibility with whatever crap RCS they are currently supporting, etc.
When all these come into play, guess what will be sacrificed or postponed till next release?
I get it that it can be done right, I believe some of their engineers sincerely want and try hard to get it right. They have already made quite a few design decisions and big steps in the right direction.
But there are "real world" constraints. Management wants to report gazillion new features this quarter, Marketing wants unique value propositions, Engineering wants to hold back the release until git maintainers accept all the patches. Guess who will lose this tug of war?
straight from the horse's mouth:
> Git can be, um, esoteric. We’ve been working to
> codify the standard “best practices” for Git in the community to
> make Git approachable and easy to use
> by everyone while not sacrificing the power.
> give you the best all-up ALM solution
> work item association, change tracking, build automation,
> My work, Code review
> We are doing work on auditing, access control,
> high availability, online backup, etc. All the things that
> an enterprise is going to be particularly concerned about.