FYI, from the announcement. Probably the most important part at the end, italicized by me. Also note (elsewhere) that licenses are MIT and Apache2:
.NET Core Tools Telemetry
The .NET Core tools include a telemetry feature so that we can collect usage information about the .NET Core Tools. It’s important that we understand how the tools are being used so that we can improve them. Part of the reason the tools are in Preview is that we don’t have enough information on the way that they will be used. The telemetry is only in the tools and does not affect your app.
Behavior
The telemetry feature is on by default. The data collected is anonymous in nature and will be published in an aggregated form for use by both Microsoft and community engineers under a Creative Commons license.
You can opt-out of the telemetry feature by setting an environment variable DOTNET_CLI_TELEMETRY_OPTOUT (e.g. export on OS X/Linux, set on Windows) to true (e.g. “true”, 1). Doing this will stop the collection process from running.
Data Points
The feature collects the following pieces of data:
The command being used (e.g. “build”, “restore”)
The ExitCode of the command
For test projects, the test runner being used
The timestamp of invocation
The framework used
Whether runtime IDs are present in the “runtimes” node
The CLI version being used
The feature will not collect any personal data, such as usernames or emails. It will not scan your code and not extract any project-level data that can be considered sensitive, such as name, repo or author (if you set those in your project.json). We want to know how the tools are used, not what you are using the tools to build. If you find sensitive data being collected, that’s a bug. Please file an issue and it will be fixed.
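For concreteness, the opt-out described in the announcement can be sketched like this. The variable name `DOTNET_CLI_TELEMETRY_OPTOUT` comes from the announcement; the `dotnet` invocation itself is left commented out since it assumes the .NET CLI is installed:

```python
import os
import subprocess

# Build an environment for a child `dotnet` process with the documented
# opt-out variable set ("true" or "1" both work per the announcement).
env = dict(os.environ, DOTNET_CLI_TELEMETRY_OPTOUT="true")

# Shell equivalents: `export DOTNET_CLI_TELEMETRY_OPTOUT=true` (OS X/Linux)
# or `set DOTNET_CLI_TELEMETRY_OPTOUT=true` (Windows).
# subprocess.run(["dotnet", "build"], env=env)  # needs the .NET CLI installed

print(env["DOTNET_CLI_TELEMETRY_OPTOUT"])
```

Setting it this way scopes the opt-out to the child process; exporting it in your shell profile makes it permanent for your account.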
I choose to ignore this post so five weeks from now I can write an outraged "Microsoft is TRACKING YOU" article on Medium that will garner me praise and upvotes.
And 100x quicker than setting the clearly documented environment variable that disables the feature.
> You can opt-out of the telemetry feature by setting an environment variable DOTNET_CLI_TELEMETRY_OPTOUT (e.g. export on OS X/Linux, set on Windows) to true (e.g. “true”, 1). Doing this will stop the collection process from running.
If it were opt-in, they'd collect significantly less data, and the data they do collect would likely be heavily skewed. Microsoft could be misled by the poor quality of the data collected, and an opt-in system could actually be worse than not collecting any data in the first place.
The way I see it, at the end of the day, the decision for Microsoft is really between not collecting data and an opt-out system. If Microsoft chooses not to collect data, then all developers have to live with tools that improve slowly and have issues (possibly security related that could be maliciously abused) that are not fixed as quickly as they could be.
If Microsoft chooses an opt-out system, they can collect the data they need to make sure their tools are working optimally and as intended. Some developers may not be comfortable sharing how they use Microsoft's tools even with no personally identifiable information collected. These people can opt-out while minimally compromising the quality of the data collected. Additionally, the tools are open source, so any developer that's skeptical of how and what data is being collected by the tools can verify Microsoft's claims.
Those are the two options I see. To me, the cost/benefit of the second option greatly outweighs that of the first for all involved. By not collecting data, security issues that could actually compromise your privacy could go unfixed for longer. By collecting data through an opt-out and open source system, Microsoft can fix issues ASAP and developers can verify that data is collected in a way that preserves their own privacy.
It seems like a lot of people are knee-jerking to the idea of collecting data through an opt-out system and not actually weighing the cost/benefit of the realistic options. Can you explain how not collecting data has a lower practical cost/benefit ratio than an opt-out and open source system?
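The selection-bias worry in the argument above can be illustrated with a toy simulation. Every number here is invented (build times, opt-in rates, the power/casual split); the point is only that when opting in correlates with how you use the tool, the opt-in sample misestimates the population while a high-retention opt-out sample tracks it closely:

```python
import random
random.seed(0)

# Hypothetical population: 10% "power" users with long build times (300s),
# 90% "casual" users with short ones (30s).
population = [("power", 300)] * 100 + [("casual", 30)] * 900

# Opt-in: suppose power users enable telemetry 50% of the time,
# casual users only 5% of the time (invented rates).
opt_in = [secs for kind, secs in population
          if random.random() < (0.5 if kind == "power" else 0.05)]

# Opt-out: suppose 90% of everyone leaves telemetry on, regardless of kind.
opt_out = [secs for _, secs in population if random.random() < 0.9]

def avg(xs):
    return sum(xs) / len(xs)

true_avg = avg([secs for _, secs in population])  # exactly 57.0 by construction
print(true_avg, round(avg(opt_in)), round(avg(opt_out)))
```

With these made-up rates the opt-in sample's average build time lands far above the true population average, while the opt-out sample stays close to it.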
That doesn't refute my point. No data collection still leaves a greater probability of issues being left unresolved for a longer period of time. Also, the code is open source. You can see exactly what data is being collected.
To address your point, it's not possible to be aware of the benefits you're missing out on without data collection.
For me the issue comes down to whether or not Microsoft does anything useful with this data (probably not, if 20 years of NVIDIA blue screen driver failure logs, Windows 8 and OneDrive are any example of how 'big data' impacts Microsoft product quality) versus how many comments I have to read where joeblow52 is personally offended that Microsoft dares to learn what his compile time plus 999,999 other compile times, divided by a million, equals.
How, exactly, are you thinking that Microsoft is going to fix nVidia's buggy drivers? They can collect all the data they want, but at the end of the day, it's nVidia's driver.
I worked there. Ways we solved these sorts of problems include: hardening the other side of the API/HAL when appropriate/possible, simplifying the driver model so that mere mortals could write drivers, writing our own drivers and overwriting known buggy ones for companies that couldn't get their shit together (usually network vendors), adding workarounds to the OS not to use certain features of certain cards, flying external engineers to lavish parties and our driver development labs and compatibility labs and providing one on one engineering development assistance from senior kernel developers, providing free testing of drivers for known problems before release, rolling fixed drivers into Windows updates, providing marketing funds as reward for fixing problems, and not using NVIDIA in the Xbox 360 after using them in the original Xbox as punishment because they were personally responsible for over 80% of blue screens in Windows for the preceding five years.
Sadly the motivation was often to ignore the data or watch it get spun by some jackass with the exact wrong agenda. It's just software, there's always a way to fix things if you really want to.
I just installed these tools, and on first invocation they tell you that telemetry is enabled and how to disable it. I think that buys a little bit of good will. I am also opposed to telemetry by default, but I understand it, and I appreciate the opt-out message being presented clearly and up front.
Microsoft is looking to do more data-driven design; this is the reason for all the telemetry in Windows as well. Raymond Chen pointed out an example where a button was removed from File Explorer (prime real estate) because the telemetry showed that hardly anyone ever pressed it.
It's unfortunate they are so tone-deaf about the PR implications in Windows.
We have removed all the weapon deployment buttons from our nuclear subs, silos, and bombers, because telemetry shows that they are typically only pressed during unit testing.
Since we only have two recorded events of someone deploying a nuclear weapon in a real-world scenario, it should be safe to relocate those buttons off the main console, and just use the auxiliary functions button, behind the kick panel, next to the row of DIP switches used for selecting the button press function. Since "reorder coffee pods" was the last auxiliary function to be added, "launch nukes" will be next, so make sure you double-check those switches before pushing, coffee drinkers!
You're joking but I'll give a serious reply to why your analogy is off. This is why they have war games. The telemetry from those games would be included in the analysis.
My point was that user interface patterns do not always indicate the importance of the control, because there may be multiple modes of operation (such as "Standby", "War", "War Game", and "Readiness Drill") that might not be obvious--or even apparent--from the raw button-press numbers.
You want the "War Game" telemetry to count for the "War" design, but you can't really tell which is which without having additional data that you shouldn't be allowed to see. If you detect "casual user" and "programmer" modes in your OS, the user shouts, "I don't want my computer knowing (and sharing) that much info about who I am!"
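To make that point concrete, here is a tiny hypothetical sketch (invented event names and counts) of how raw button-press totals mislead when the usage mode isn't, or can't be, recorded:

```python
from collections import Counter

# Hypothetical telemetry events: (button, mode). The mode column is
# exactly the extra context that telemetry often shouldn't collect.
events = [
    ("launch", "war_game"), ("launch", "war_game"),
    ("launch", "readiness_drill"), ("launch", "war"),
    ("reorder_coffee", "standby"),
] + [("reorder_coffee", "standby")] * 99

raw = Counter(button for button, _ in events)
# Raw counts suggest the coffee button is 25x more "important"...
print(raw["reorder_coffee"], raw["launch"])

# ...but restricting to the mode that actually matters tells another story.
war_only = Counter(b for b, m in events if m == "war")
print(war_only["launch"])
```

Aggregated by raw frequency alone, the design conclusion would be to demote the launch button; only the mode breakdown reveals why that's wrong.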
Even though most people aren't likely to ever need to use a fire extinguisher, you really want that interface to be simple and accessible. If you decide that because no one ever used this particular fire extinguisher, it doesn't need to be there, that's a decision that could make the difference between life and death. Moving a button on a form doesn't seem quite that serious, but it could still cause people to lose time or money, which adds up across all the people using the software.
I know this is just meant in jest, but because the weapon deployment buttons are something you never want pressed by accident, ideally you really do want to obscure them behind layers and layers of user interaction.
If anything we had the problem of them not being obscured enough for a while (if the story of the launch codes being 00000000 across the board is true).
I always suspected that there was probably significant overlap between users of macros and users who opted out of telemetry. Lesson learned: leaving "yes" checked from now on.
Data collected by a neutral third party, without conflicting interests pushing for creative monetization opportunities, that's responsible for sufficient anonymisation/aggregation before releasing the data?
The Ribbon is, in my opinion, the user interface equivalent of a broken pull tab on a can of corn. I know it is there, I know that I'm supposed to use it, but I can't because it is broken.
Backspace is "back" not "up a folder level" as far as I am aware. This is consistent with web browsers - and indeed the conceptual meaning of "backspace" (go back 1)
So random question: I like when MS people come out and engage the community like we are responsible adults and not like moronic children who need hand-holding and cannot decide for ourselves.
Since we all know the famous "open source is cancer" reign of Ballmer oft-joked of here, I assume the culture shift is real and that this is less of a concern now than it was a few years ago. But I am super happy when people like you drop by.
How do people like us on HN, Reddit, and Ars get you, and those like you, to a place where you can tell daveguy's manager[s]: see, people will reconsider their avoidance of the MS tech stack because of how daveguy did X?
Is this silly? I am not sure. But I wonder if I am the only one who wants people like yourself to thrive so MS keeps going in this direction.
The best way to make sure they (we, since I work at Microsoft) get the feedback is by finding out where the product is listening.
A lot of products have UserVoice set up, a lot of dev product teams (dotnet, asp.net, etc.) are looking for GitHub issues.
For Windows 10 I'd recommend using the Windows 10 Feedback Hub - if you hate something about Windows 10, post feedback in the Feedback Hub before you uninstall it.
There are lots of us who see your comments on HN / Reddit / Slashdot / etc. but having been part of the Microsoft ecosystem (both inside and out of Microsoft) for many years, I can tell you that there's a huge difference between "I've seen some unhappy comments about X" and "Our feedback numbers show 68% of users want us to improve feature X". Also, specifics are important - a detailed issue report with repro steps and recommendations for what you'd like to see work a whole lot better than "@drunkvs LOL VS suxxxx ikr!!!"
Also, just because I'm an ocd fact checker, Ballmer technically didn't say "open source is a cancer". He said that Linux is a cancer, referring to the strong copyleft license requirements (https://web.archive.org/web/20011211130654/http://www.suntim...). [yes, I very well know that you can run software on Linux without that software being GPL. I'm writing this from Red Hat DevNation :)]
Regardless, I can tell you that our current CEO and everyone I work with is excited about writing great software that runs everywhere.
Well I tried that once or twice in the Connect era. But if you say it will make a difference this time, next time I have issues I will contribute.
And yeah, that cancer line was meant to snarky and cute, not factual. It does not really matter more than me indicating the obvious, that this is a big culture shift if you look at the timeline. I think MS has ways to go too, and the long game is where we will see how it fares, but I want to make sure we encourage the good. Or it will languish, as "no one cared after a while."
Yeah, please try again. The problem with a lot of Connect programs is that they were tied to a specific release (MonkeyManager Pro 2008) so bugs that didn't make the cut for the product release just got lost in space (not picked up for MonkeyManager Pro 2010). Part of the reason was that there were always internal bug tracking systems, which was what the dev team was really working from day to day, and these other systems were just inputs to that.
If you look at any of the projects that are on GitHub now, the public issues list is the issues list. You can see all the dev comments, scheduling via labels, pull requests, code reviews, etc.
For products that aren't on GitHub, public issue lists like UserVoice have still been a big improvement (in my opinion) because they usually keep the bugs / issues / feedback around until it's fixed, and the votes accumulate so the important issues keep bubbling up.
I had a couple of reports on Connect related to crashes in Edge on Xbox disappear as part of a migration - the best I can work out they've become private/internal. I'm assuming those kinds of things are going to UserVoice and not GitHub but is there some way I can find out what happened in the end?
I am looking into ways to move everything we do off of MS Tech completely because I don't trust their direction with Windows 10.
I posted this because I thought it was an interesting and relevant part of the announcement. If MS were this straightforward and transparent with Win10 (allowing easy disabling of telemetry and the option to control upgrades at all levels), then I wouldn't be looking at alternatives. Off by default would be better, but they would get approximately zero feedback with that. Best would probably be to ask on install, with a checkbox that's easy to uncheck.
Whoever decided on their .NET tools telemetry and messaging needs to be in charge of that aspect across all products.
You and I are on the same page. What concerns me is that the best work on securing core internals of the OS against long-standing problems, such as strengthening LSASS and SAM against pass-the-hash with virtualized, isolated process shielding and hardware support, is tied to Windows 10, so I will be forced onto it eventually anyway. All the talk of Corona scares me, and not even CIS and the big players have guidance, since MS has the new attitude of "just trust us already." Their announcing that, even with the Enterprise SKU, you cannot disable the Windows Store worries me. In my personal life, Windows is something I hardly ever use. But work, and Windows 10 being shoved down into the desktop market, is scary. The Credential Guard stuff and improvements are wonderful, but their increasingly "we're the cloud and you're the peon" attitude also terrifies me.
Well, until it is not interesting for them as a PR strategy. I do not want to lose these people when the pendulum swings back the other way.
And you and I check GitHub. That does not mean any of the engineering managers care. I am curious what else we can do beyond starring GH repos. I do that; I just do not like my lazy armchair form of support, and I worry I will lose a culture shift at MS that I have quickly fallen in love with after years of reviling the company as a whole, short-sighted or not.
Purely subjective and observational... but I had a few MS guys go out of their way to communicate a bug in probably the most popular node.js driver for MS-SQL (tedious), and it was interesting. There wasn't much updated on the GitHub issue, but I was included in some of the communication, and they were pretty open about it.
The person who had poked in was, IIRC, an Azure developer at MS, not from the MS-SQL team. There are plenty of developers at MS who follow various GH projects (as happens everywhere else) and will get the right people involved when they see things.
I would presume that by pushing these kinds of projects (.NET Core, etc.), and even the documentation, out to GitHub, we likely won't see things closed off any time soon, at least not without losing more mindshare than Oracle lost by driving people away from Java.
I don't think the previous comment is suggesting it is bad. I think they're trying to stave off any "OMG! MS is looking at your porn!" posts, by highlighting the privacy policy of the telemetry; which sound reasonable to most people.
Thank you. I wasn't suggesting it is necessarily bad. It looks like the openness and ease of disabling is being handled much better here than with Win10.
However, the option to disable on install (a checkbox prompt for preference) would be much better. Still, this is an improvement over the Win10 approach.
Sorry if I took it the wrong way and jumped the gun too soon.
Microsoft is by no means an example of sainthood, but many in the HN community tend only to criticize it while turning a blind eye to similar actions by companies that are closer to their hearts.
Yes. It's bad when Microsoft does it this way because it's opt-out. IntelliJ/Android Studio prompts you and asks you your preference first. I think Eclipse is opt-in but could be mistaken.
Homebrew is opt-out post-install by setting a flag as well (no option or warning pre-install). But they do remind you afterwards that they are tracking and give you a link to learn how to disable it.
As long as no data is sent until first use (giving you a buffer to opt-out), I don't have an issue with this.
The telemetry was in the original preview release of the tools with RC2. It can be turned off and I believe it won't be in the final stable CLI tools release, but that may change.
It's the same with VS and VS Code (and Atom too). There has been a move from opt-in (for older versions of VS) to opt-out. Although I do still see connections to MS domains with all the customer experience stuff turned off.
I simply don't want to be tracked in any form, regardless of whether it collects my personal data. Yet it seems more and more impossible these days. It's like being told, "As long as your personal data isn't collected, you should compromise."
Sorry to tell you this 4684499, but this website and most every other website on the internet knows how many page views they have and the IP addresses that are used to get to their website, etc. ...all because they have telemetry on the server. It's how they tell if something is working or not working. Not all telemetry is bad.
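The kind of baseline server-side telemetry described here amounts to counting requests in an access log. A minimal sketch, with made-up log lines in an invented `<ip> <path>` shape:

```python
# Hypothetical access-log lines: "<ip> <path>" is the bare minimum a
# server records simply by serving requests.
log = [
    "203.0.113.5 /",
    "203.0.113.5 /item?id=1",
    "198.51.100.7 /",
]

page_views = len(log)
unique_ips = len({line.split()[0] for line in log})
print(page_views, unique_ips)
```

This is "telemetry" in exactly the sense the comment means: no explicit client instrumentation, just aggregate counts derived from data the server sees anyway.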
Well, it does track, but it doesn't necessarily collect the data and use it to profile users. I just don't like being experimented on like a little white mouse in a lab, regardless of whether it's for good or bad.
A couple months ago I installed .NET core on a Ubuntu virtual machine running on the Windows 10 hypervisor, and was able to get a MVC5 app running using Visual Studio Code. As someone who really loves Visual Studio (it made me expect a lot more from my tools) and C# (it made me expect a lot more from my languages), it was an exciting moment. I actually took a selfie with my monitor.
It was still a little rough: the "getting started" instructions ONLY worked on Ubuntu 14.x and not Ubuntu 16.x, and my PR to the documentation pointing this out was nixed. (I notice they've since added a disclaimer: https://docs.asp.net/en/1.0.0-rc1/getting-started/installing...). I really hope to someday be able to build projects with React and a .NET core WebApi and be confident that my teammates will be able to get the project running on their macbooks without kms.
MVC 6 is the version that works best with Core, although they don't really refer to it directly much any more. Both MVC 6 and Web API 2 are now part of Core. They're just packages like everything else.
MVC 6's name is now also ASP.NET Core MVC 1.x.x. It's really a version number reset. The confusing thing is that it's not plugged through everywhere and people interchange the two regularly.
#wellactually We don't talk about ASP.NET MVC as a separate thing anymore. It's just ASP.NET Core. So you might build an MVC app on ASP.NET Core, or build APIs on ASP.NET Core, etc., but it's all just ASP.NET Core.
Personally, I like the new "one true name". However, most .NET developers know what ASP.NET MVC is for, but are less clear about what ASP.NET Core is about. Hopefully this will change over time but, for now at least, MVC is still a useful term from a marketing perspective, or at least that's what my publishers tell me. :)
I actually used that version in my recent book. You can see it in this section [0] of the first chapter (which is free to read, with no sign-up) that tries to explain the confusing naming.
That tweet dates from 2010 and the same quote is on this page [0] dated 2009. Of course, it's possible Martin updated it after the initial publication. I'm not saying you copied it either, it could have been independently created.
Visual Studio Code is pretty neat, and its plugin system is very powerful: more and more languages are adding amazing IntelliSense and interactive debugging support for it. It's like Atom, if Atom were faster and focused on exposing nice APIs for autocompletion!
I've played with VS Code a bit, it's a pretty cool tool for small scale scripting, but it's not really a replacement for a full fledged IDE.
Hopefully Jetbrains will release their cross platform C# IDE sometime soon. I would prefer Visual Studio, but I don't think that's going to happen (at least not anytime soon).
Not being fully fledged IDE can be good and bad. I'm using VSCode for most things I use VS for. It's super fast to launch, great language extensions, debugger, integrated terminal and tasks. It's like sublime and visual studio had a baby. I have licenses for jet brains but I seldom ever use it.
I may also be a bit biased since I contribute to VScode extensions. But I see that as a positive. I use a free editor that I can hack, look at its source and collaborate in the open.
F# doesn't work OOTB with SDK preview2. It works OK with preview1 of the SDK (win/ubuntu/osx/docker), but preview2 has a bug; the fix is in progress (ref https://github.com/dotnet/netcorecli-fsc/issues/12) and a NuGet package with the fix will be published soon.
You reported the issue 11 days ago. I'm surprised they just announced it with such a basic use case being broken. It doesn't bode well for F# as a first class citizen in their ecosystem.
F# is not a first class citizen. They pay it lip service because it's a far more advanced language and makes MS look like they're on the cutting edge. Plus the team that made it is responsible for dragging the CLR into the modern era (or into the 60s) by bringing generics. And showing off important features on .NET such as quoted code, async workflows, F# interactive. But a simple look at tooling and language announcements shows that the F# team is very underfunded.
> The results of the 2016 Stack Overflow Developer Survey are in! F# came out as the single most highly paid tech worldwide and is amongst the third top paying techs in the US
Yes and it's clear from what the F# team has said that the hope they have for the future of the language is that the community will invest.
That's fine, but let's not think that this will produce tooling anywhere nearly as refined as C#'s stuff. Take F# interactive v the C# one. F# has like a decade lead. Yet the C# interactive editor is smooth, polished, even has VS project integration, something the F# team had thought of doing many years ago.
Non F#-team members[1] have said that internal politics are the issue here. To the point where some books were ... edited ... to paint C# in a better light, relatively. MS's marketing reflects this. My guess is they're too proud to admit their flagship language from their high-profile hire was shown up by what was a research project. And that the CLR's arguably biggest tech advantage over the JVM (generics) was also only done through the intense efforts of MSR; that MS Corp was against it.
It's sad, because MS is in a position to really elevate the world's programming consciousness/ability by really promoting F#, yet it's still a novelty for, as MS has said "scientific and engineering" applications. Yet, apart from tooling/legacy, F# handles every case C# does in a better way. At worst, it's C# with lighter syntax.
Oh well. At least it's there, works, and has some level of support. Only reason I consider using .NET these days.
1: The F# folks are amazingly polite and I've never heard them even hint at a complaint about MS.
The finance industry, where F# shines, is willing to invest, I suppose. OCaml, e.g., is backed by Jane Street, and F# is even simpler in that regard, because the hardest part, an efficient runtime, is for F# just .NET, already done well by MS. But what could some corp willing to invest in F# tooling really do while Visual Studio is closed source and, to say the least, not the most transparent IDE in the world with respect to plugins?
> too proud to admit their flagship language from their high-profile hire
That's apt, given that we know that high-profile hire's past victories (Turbo Pascal, Object Pascal, Delphi) were never about the language, but about an incredibly polished IDE, compiler, libraries, and runtime.
> yet it's still a novelty for, as MS has said "scientific and engineering" applications
MS marketing wisdom is overrated, to say the very least. Look at the Tablet PC. Windows XP Tablet PC Edition was released in 2001. Ink APIs have been throughout the Windows SDK ever since. And they never realized the stylus was something more than just "uhm, you can draw a kitten, maybe?". At least they never articulated anything more than that in any promotion campaign. And now Apple has released the iPad Pro and will eat the Tablet PC/Convertible/Surface market.
The bug was found too late, near code freeze for RTM; there wasn't enough time for a fix.
The fix is in progress (https://github.com/Microsoft/visualfsharp/pull/1290), probably landing tomorrow; one or two days of delay is not a big problem. Also, it's an SDK issue (preview2), not a .NET Core (RTM) one.
The F# support is in beta; C# was ahead, obviously, since it's the language used for corefx, etc. VB is not working at all at the moment.
What I really like is how the SDK is evolving: using modularized components, it's possible to fix/improve/evolve the F# compiler/library without waiting for a new version of the SDK. That's really good.
In the next SDK or RTM, `dotnet new` is also going to be updatable, so no problem with templates either.
Seconded. I saw the announcement and immediately went to grab the docker image and give it a shot with F# via "dotnet new -l F#" and go. It restored and built, but didn't run. So v1.0 is broken, I guess?
Still a little odd for a release announcement. It feels disingenuous to only afterwards make the fine-grained distinction that yes the libraries are technically v1, but the tooling is still in preview.
Kinda like announcing your new line of cars is ready for purchase today! Except that down in the fine print you might find out that the engine is ready to go, but the steering wheel, headlights, dashboard, and pedals are all still in development and so it's justifiable that they're broken.
It was like that in the previous release too: .NET Core was RC2 and the .NET Core SDK was preview1.
I use it, and I think it's OK.
The SDK is just a build system; the real value is .NET Core, meaning the CoreCLR (the virtual machine) and the foundation libraries (corefx).
I can change the build system one month from now, to build a project using the old packages.
It's not a car (engine vs. wheel). It's more like food vs. the marketplace: the food is the real deal; the marketplace may be incomplete (no parking).
After so many years, Microsoft is finally presenting its tools in a more modern way. I've never been a huge fan of .NET, but I can't deny it's a great tool; hopefully people will try it out on more platforms.
.Net is 14 years old now - if you don't know what it is by now you're probably not ever going to know. It's a cross platform runtime and a group of programming languages that run on it, just like Java.
The general idea of .NET as a platform is easy to understand. The naming of .NET Core, .NET Framework, ASP.NET, etc. and the difference between them is not.
My point is that it's just hard to understand what exactly one means when writing ".Net". Do they mean the entire framework, a specific library, the CLR, one language, etc.? For whatever reason, the name has a very contextual meaning, and people use the terms interchangeably (".Net Core" is clearly specific, though).
Not that they're alone. Adobe did worse with "Flex" since it could refer to the compiler, a framework, and an editor at one point until they decided to make it a bit more standardized.
Java JRE/SE/JDK... It's really not much more confusing than what others use, and as to what you need to install for something.
.Net Core apps should be portable applications (portable as in the runtime is compiled in)... Yeah, it is a little confusing, and hopefully removing some of the separate terminology will help. A lot of what has changed is that you will likely be developing .Net Core (or Xamarin) apps that target a given platform to run on... Most of the rest should be cross-platform modules that install via NuGet (the platform/language package manager) and are bundled with the application output.
> Java JRE/SE/JDK... It's really not much more confusing than what others use
That's not saying much; "Java" means so many different things it's enough to make your head spin. At least these days it isn't a stock ticker symbol any more.
Java is confusing too so being just like Java doesn't mean it's easy to understand. (Do I need the Java runtime environment, the SDK or the browser extensions to run this code?)
That's really only half of it; Java is also a programming language, a bytecode spec, and a binary executable that sometimes points to a JRE and sometimes refers to a JDK, depending on how you installed it. Arguably it's also now being used to refer to the API of the standard library too.
The naming is very confusing but it's better than the old name of ASP.NET 5. Source: I had to write about this for a book and it was a challenge to explain it clearly. :)
I went full circle, from a critic how they cloned Java back when my employer had the privilege to beta test .NET as a Microsoft partner, to someone that enjoys delivering solutions on the .NET stack.
For me the sweet spot will be when .NET Native becomes more mature and I can get Delphi/Modula-3 back.
C# was originally designed by Anders Hejlsberg, the C# 1.0 features that weren't taken from Java (J++) were based on Delphi, like Properties.
You also have RAD development on .NET.
Although .NET always had AOT compilation via NGEN, it always depends on the runtime being available at OS level.
With .NET Native it is like things were on PC before JVM took off, with strong typed system languages like Delphi and Modula-3 that compile directly to native code.
I know one needs a bit of imagination to make the comparison, but it is how I like to think about it.
I can't speak for the author, but my first thought was the speed. Delphi was almost as easy to develop in as Visual Basic, but the result was fast: a native EXE, with no big framework dependencies.
Have you tried writing a complex web app/API with C#? It's an absolute joy to use. Python still has it beat for math and ML libraries, but for getting a quick web app up with robust (and easy) data queries (LINQ), it's pretty unbeatable.
It's fascinating how Microsoft's marketing moved from ".Net - One platform with multiple languages" to "This is .Net - here are some samples in C#" (without mentioning C# by name anywhere on the page).
I think that .Net would be much better if they had this kind of focus in dev side from the start.
If I'm an experienced Python developer who develops a lot of websites and APIs using Django or Flask and SQLAlchemy, is there any reason to try this stuff? C# is a great language, but what about the libraries? What replaces Flask? What replaces SQLAlchemy? How do I deploy? Looking for some practical reasons to invest time on this if Windows development is not in my roadmap.
For me, the GIL is a feature. Proper pthreading is for systems software as I see it. If you're rewriting Apache or IIS then you may need to bring in Rust, C(++).
I never found platforms like .Net and Java to be low level enough to write systems software, where you'd definitely want multithreading, and not high level enough to be as convenient as Python.
I write in Python and then use PyPy for more CPU-intensive services. If I wanted multithreading and instances wouldn't cut it, it would be a pretty hardcore use case (for a lone wolf like me); I'd most likely also need no GC, so I'd reach for something like Rust rather than C#.
I hate to say "no" to learning anything, but to be the devil's advocate this is how I've always seen it.
The performance of IronPython was horrible, and since Python is an ecosystem of libraries built in C and wrapped in a Python API, alternative implementations end up unusable: most Python libraries that matter are incompatible.
No. If you really want a high performance Windows friendly server software, try Java. But honestly I would not use .net for anything past excel plugins.
It's really just a copy of their informational web site content with articles by different authors, but still very conveniently combined into a single PDF useful for printing etc.
I might be missing something here, but does this mean that .NET Framework and .NET Core have diverged, and you need to take extra steps to keep code compatible with both?
The major differences between .NET Core and the .NET Framework: [...] APIs — .NET Core contains many of the same, but fewer, APIs as the .NET Framework, and with a different factoring (assembly names are different; type shape differs in key cases). These differences currently typically require changes to port source to .NET Core.
While I understand the motivation, this at first sight looks like something that will be with us for a long time, and could make life more difficult especially for library authors, who need to potentially target both 'platforms'.
[Disclaimer: haven't used .NET technologies for a very long time and might be horribly wrong here]
I've mixed feelings about this release, because it seems a bit early to me. Some fundamentals will change soon (project.json replacement) and I am still not able to get debugging to work in VS Code on OSX. The whole situation in the .NET Core space was very confusing for the last months and this trend seems to continue. This is sad, because I'd love to code in C# on any platform and that things would just work as advertised on the website.
Edit: Debugging works now for me, after the .NET Core Debugger was automatically installed in the background!
I also think they've made a terrible decision, the first-class support for DI is a bad direction which has made many simple concepts like config settings into absolute farces that take 5 lines of code.
It's code "purity" over usability. It's putting the core dev team's principles over their customers actual need, completely violating KISS, DRY and YAGNI.
DI really is our generation's factories. I'm already seeing projects written by people who don't understand it, making utter nightmare spaghetti code, worse than any code I've ever seen.
It's nasty scaffolding code which is a symptom of limitations of the language, definitely not code anyone should ever actually be wasting time writing.
Instead of recognising it's a flash in the pan, the core team have embraced it and are trying to force it down everyone's throats, and a lot of programmers simply don't get it and are making an utter mess instead.
I'm not sure that the tooling will necessarily need to be updated, as the rest seems to be NuGet packages; those should be easy enough to update as you build an application.
I'm more of an outsider now, as I've been doing far more node/js dev lately than .Net ...
Is there a primer somewhere that explains simply what the difference and dependence is between .NET (Core, Foundation), ASP.NET (Core, Foundation), Xamarin, Visual Studio (in all its different flavors, including VS.NET etc.) and Visual Studio Code?
I tried Wikipedia, and Microsoft's own homepages for each, and am even more confused.
I'm a hierarchical thinker, and a hierarchical tree explaining the above would be very very helpful.
Disclaimer: I haven't downloaded any of these yet. I usually don't until I've understood something.
Bonus question: If I want to develop websites and electron apps using: HTML, CSS, Javascript and PHP, what's the minimum set of technologies (or whatever it is that Microsoft is calling them these days) I will need?
Dotnet Foundation = NGO that manages various legal and org things for .NET. Think FSF for GNU.
.NET Core = cross platform software development platform. Includes a VM, compiler, tons of libraries.
ASP.NET Core = HTTP server + server side .NET libraries.
There is no ASP.NET Foundation, it's all part of the Dotnet Foundation.
Xamarin = .NET libraries for mobile development. Wraps Android/iOS native libraries.
Visual Studio = native Windows IDE for .NET and other languages.
Visual Studio Code = portable, Electron/HTML 5 IDE for Javascript/Typescript and other languages.
You probably want Visual Studio Code for HTML/PHP. But you should check out .NET Core, it can replace PHP.
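To make the ASP.NET Core entry above concrete, here's roughly what a minimal self-hosted app looks like in the 1.0 API. This is a sketch, not a full project; Kestrel is the bundled cross-platform HTTP server, and the required packages (Microsoft.AspNetCore.Server.Kestrel etc.) are assumed to be referenced:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;

public class Program
{
    public static void Main()
    {
        // Kestrel is the cross-platform HTTP server that ships with ASP.NET Core.
        var host = new WebHostBuilder()
            .UseKestrel()
            .Configure(app =>
                app.Run(ctx => ctx.Response.WriteAsync("Hello from ASP.NET Core")))
            .Build();

        host.Run(); // listens on http://localhost:5000 by default
    }
}
```

The same code runs unchanged on Windows, OS X and Linux with `dotnet run`.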
By the way, if it works, it works, but I find it better to research and do at the same time. Downloading only after you've understood the org chart seems a bit too radical for me :)
.NET Framework is the old .NET Framework. It's basically the entire shebang, tied to Windows: VM, class libraries, etc.
.NET Core is as the name says, just the "core" of the .NET Framework, the part that's cross platform. However, I'm not sure that at this point the code's common, I believe at least a part of it has been reimplemented. I think that in the future .NET Framework will be based on .NET Core. Therefore .NET Framework will be .NET Core + Windows specific bits.
WPF is the fancy name for a .NET UI toolkit for Windows, basically. Think of it as a Windows-only GTK.
Does Microsoft have some diagram or something that shows how all the .NET things fit together (on Linux)?
The C# development I've done was... fine, but that was seven years ago and it's like Java in the sense that it's an amazingly complex thing to grasp and keep up with. Understanding how everything is layered is pretty complicated at this point.
Interesting name choice considering they have already had a .Net 1.0. I mean I get it, I understand why they chose to make this more of a 1.0 release but for those already in the .Net ecosystem it seems a little confusing to me.
It could have been more confusing by bumping versions. They first tried ASP.NET 5, ASP.NET MVC 6 and EF 7, and there was no simple migration path from the older versions. Now they are all Core 1.0.
Excuse me if this was already obvious, but the word "core" designates a new product line, starting from 1.0. Obviously, it's informed heavily from the old .net framework, but it's not backwards compatible with it.
This is just a personal view, but my first instinct when I read about product "CoolApp" and "CoolApp Core" would be to assume core is a subset of the first. Not a new and re-engineered product line.
I could see why others might see .Net Core as a parallel product as opposed to a new one.
Sure, it's obvious. Like I said, I "get it"; it's just that "core" has always been a part of .NET, just less directly referred to, so my first thought was that this was .NET 1.0's core until I started reading through the link.
Overall I think it's a good move just a little confusing (unless I'm the only one) for those who have done .Net work before.
They link to some benchmarks in the article: https://github.com/aspnet/benchmarks (they're a little hard to read on mobile, but I think the tl;dr is "plenty fast"). Or, to quote the article: "Our lab runs show that ASP.NET Core is faster than some of our industry peers. We see throughput that is 8x better than Node.js and almost 3x better than Go, on the same hardware. We're also not done! These improvements are from the changes that we were able to get into the 1.0 product."
Scott Hunter of MS did an interview not so long ago re .NET, Kestrel (the new libuv-based HTTP server they built), and their march to become a top contender on TechEmpower.
They have a team who does it all for fun and has an impressive testing environment built out. Fascinating to listen to given today's news. I listened a few weeks ago myself.
I have friends at work who liked C# and moved on to frontend. The fact that they're now so much less interested in .NET is amazing to me, as I finally want them to teach me to use ... on Linux!? I never thought I would say that. Haha.
My bad, I think I meant that one! Their website without JS enabled is terrible these days, so I searched around with Google and that was from a FB post I scanned.
Yes, their new site is terrible, I much preferred the simple site they had before that loaded quickly. The new animations, modals and UI buttons are a big downgrade and I'm surprised they went with it considering some of their prior podcasts.
I should not be so judgemental. I noticed they tend to interview what appear to be close friends of theirs in the Angular community; for example, John Papa[0], one of the popular devs/evangelists of Angular, is a friend of theirs they interviewed.
More elaborate front-end was inevitable. That being said, I love their content and only really interact with their work through an Android podcatcher/audio client.
The webstack is pretty fast, they're working on getting listed officially in the techempower rankings [1]. Here's an older article from Feb explaining more details. [2]
"We used industry benchmarks for web platforms on Linux as part of the release, including the TechEmpower Benchmarks. We’ve been sharing our findings as demonstrated in our own labs, starting several months ago. We’re hoping to see official numbers from TechEmpower soon after our release.
Our lab runs show that ASP.NET Core is faster than some of our industry peers. We see throughput that is 8x better than Node.js and almost 3x better than Go, on the same hardware."
Good news! Now time for all the library and framework authors to add support for .NET Core. I know a lot of people were waiting for RTM before starting this, considering the massive changes between RC1 and RC2.
I've started a support matrix project at https://anclafs.com. Feel free to file an issue or send a PR on GitHub.
Thanks! I have a big list of things to add but I'll check this is on there and add it if they are planning (or have already added) support for Core. It started as things I had talked about in my recent book.
It depends on if you want to run ASP.NET Core on .NET Framework 4.6 or on .NET Core 1.0. It's not just the SDK that is new, Core console apps are pretty recent too, as previously it was mainly for web apps.
You can have a web UI in a desktop app. Just run the app with an embedded Kestrel web server and build all the UI in HTML. And it will look pretty good on all platforms.
That's Electron, .NET Core (?) and Squirrel, all working together. Looks pretty cool, just pity the people on <20 megabit connections - these apps are getting pretty big.
A bit unrelated, but I'm hoping to hear something soon about the future licensing model for the upcoming SQL Server on Linux. Do we get a free version without DB size / CPU / RAM limitations (but perhaps with some other restrictions) that smaller companies and startups can use in production?
I'm planning to run my future ASP.NET MVC projects on Linux and very much would like to know if a better SQL Server Express / Community Edition is coming or do I need to move to PostgreSQL (which I have already started looking into).
"A customer who buys a SQL Server license (per-server or per-core) will be able to use it on Windows Server or Linux. This is just part of how we are enabling greater customer choice and meeting customers where they are."
I've started with the book "PostgreSQL Up & Running". It's rather lacking in detail and has a lot of typos, but I'm nevertheless enjoying it. It's written in a simple language and it provides a good landscape overview, which is exactly what you need in the beginning. Get the big picture first and then you can look for the necessary details as you go.
For a developer favoring .NET, the degree of .NET integration with SQL Server might be a compelling advantage for applications where the limitations of the relevant SQL Server tier weren't a problem.
Because I really like the power of SSDT (SQL Server Data Tools) and how nicely it integrates with Visual Studio and would like to continue using that instrument if possible.
But I can't seriously build a complex product around SQL Server knowing that 10GB is the ceiling for my database.
You can go ahead with PostgreSQL if you use Entity Framework and the Code First approach. I used it many times and was able to quickly jump from PostgreSQL to MS SQL and back. It just works. With EF I don't see any reason why I should tie myself to any particular SQL database.
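A minimal sketch of what that provider swap looks like in EF Core; I'm assuming the Npgsql.EntityFrameworkCore.PostgreSQL and Microsoft.EntityFrameworkCore.SqlServer provider packages, and the connection strings are placeholders:

```csharp
using Microsoft.EntityFrameworkCore;

public class Blog
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public class AppContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options)
    {
        // Switching databases is a one-line change of provider;
        // the LINQ queries against Blogs stay exactly the same.
        options.UseNpgsql("Host=localhost;Database=app;Username=app;Password=secret");
        // options.UseSqlServer(@"Server=.\SQLEXPRESS;Database=app;Trusted_Connection=True;");
    }
}
```

Everything above the provider line is database-agnostic, which is what makes hopping between PostgreSQL and MS SQL cheap.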
Personally, I prefer writing SQL by hand. That way I know exactly what will happen in a query. A database really is an important component of a system and shouldn't be treated like a dumb data store where you just throw in any stuff you like through an ORM and hope it sticks there somehow.
Lots of people do exactly that (treat it like a dumb data store) and are wildly successful. Do what you want, but don't say other people shouldn't do it a different way which has been proven to work.
Certainly. Yet there's been a tendency lately to declare SQL wrong and deprecated and to incite everyone to forget about it and go with an ORM as the only way. I do not accept that. The stronger the ORM zealots push, the more repulsive the idea of an ORM becomes to me. But of course it's just another instrument that has its place among the others.
Only among the inexperienced. Skilled people use both as needed, even at the same time. You can keep the complex SQL in a view and map the view with the ORM. ORMs are far too valuable to ever hand-write all that SQL, and SQL is far too flexible to ever ORM 100% of every use case; the ORM should make up most of the program, with hand-written SQL sprinkled in where the ORM is clunky.
It's easy enough to do that, you don't HAVE to use an ORM framework. Though mapping sprocs to an Entity model is easy enough... I've had to do that several times where performance in EF was particularly bad.
I would be surprised if there wasn't a free (as in beer) Express version for Linux, with a DB size limit like 10GB. Otherwise Entity Framework Core supports PostgreSQL, but there isn't any lazy loading yet.
I don't believe it works outside of using EF. Last I checked no database drivers existed for .NET Core for MySQL and PostgreSQL. For that reason I've stuck to Django.
I think that probably depends pretty heavily on what your app is. One of my clients has about 200,000 user accounts (and 40K actives/week) in a database that's well under 4GB, and it's been live for three years now. It's in Postgres, but there's nothing that'd stop them from using a 10GB-capped SQL Server Express until a point where having to pay for SQL Server is a good-problem-to-have.
Yeah, the only problem with Samsung is that they're most likely going to make builds for the Exynos-based SoC, which isn't that widespread.
The ARM ecosystem is quite fragmented and depending on how you build your software it's not as portable as one would assume.
It's also not clear which instruction sets they are going to be aiming for. Exynos started with ARMv7 plus all the stuff that Samsung added to it, but the newer chips are ARMv8.
The Raspberry Pi still uses ARMv6, 7 or 8 depending on the exact model, with the newer RasPi Compute and Zero models still running the old BCM2835 ARMv6 CPUs.
So I'm hoping for more or less clean stock ARMv7 and ARMv8 builds of .NET Core coming out sooner rather than later. I also hope they'll release the x86 version for more platforms than Windows, since running it on something like an Intel Edison, which comes with a 32-bit Atom CPU, would be quite cool.
.NET is quite powerful, and for my taste it's considerably better than Node.js (I don't like JavaScript; this isn't some technical observation, just personal preference). But I do like C# and F# quite a bit, and having the ability to run the same code across multiple platforms makes me excited about IoT/embedded devices again, especially considering that I don't have to work through some of the headaches that come with Mono (some pun intended).
What is the licensing like if you're a vendor and you want to ship software based on .NET Core on your own hardware appliances? Can you redistribute the runtime, or do you have to pay?
Even if this isn't the final release, I'm confused why they would include version numbers in the package name at all. One of the reasons of having a package manager in the first place is to avoid things like this.
Yeah but so what. If the "best" language always won, why isn't the entire world running on Haskell and Rust or whatever? Java's eco-system is ginormous and it's deeply entrenched as the "corporate stack" in non-MS shops, with more than 15 years of history.
I'm afraid this is just too late to make a big dent. C# is nicer than Java, but not that much nicer to warrant switching over or rewriting your stuff in a very immature eco-system.
Maybe they're hoping to capture some of that elusive start-up market- and mind-share? That one is already pretty hostile towards MS...
Either way, this is a good thing, I just wish they had done this in 2005 or so.
I'm not entirely sure about that.. Node.js has made huge inroads in the 6-7 years it's been around, and Go is pretty popular in some circles as well. Also, look at how RoR grew. It's entirely possible we'll see a shift towards .Net away from Java as an option for lots of projects.
Especially as the tooling for micro-services and docker (or similar) gain traction moving forward. C# is pretty well supported, and at least here in Phoenix is about as common as Java is... can't speak for other metro areas.
I've been a pretty big node.js fan, and haven't used C# as much the last few years, but I wouldn't count it out. Two years ago, C# wouldn't have been on my radar for a new project, now it's entirely possible, given the opportunity for better cross platform deployments.
On top of that, Scala is a much nicer language than C#. I moved from being a C# developer to Scala and couldn't be happier. The best thing about C# is Visual Studio (IDEs are weaker in Scala land), but a bigger ecosystem and a more powerful language make up for it imo.
Pattern matching, destructuring, and ADTs are amazing. Inference is much better and not just for local variables.
for comprehensions are the equivalent of LINQ syntax, which is nice, and very powerful when working with asynchronous calls.
Higher-kinded types combined with implicit parameters are the two aces in the hole that make it an absolute, definitive win when head-to-head against C#. When I had to switch back to C# in my last job (some projects were Scala, some C#), this was what I missed the most. Combining the two gives you the 'type class pattern', which is pretty much the first time I've actually seen composition and code sharing just work without the cruft of an 'OO'-like framework.
Microsoft's previous stewardship would never have allowed what they're doing now sadly. They must have lost years of progress with their previous strategy of locking things down. Still, better late than never - I love where they're going with .Net now.
Language and tooling wise? Yes, or very close to it.
There's a lot to be said about the huge amount of value in the main maven repository vs. what's in nuget.
Java's decade long head start in that area doesn't seem so huge when you look at what the node.js community has put together in only a few years.
If you were considering .net on a new project, however, I would probably recommend comparing something like Scala. Scala has the benefits of being a "modern" language like C# with the full support of all or nearly all of the existing JVM projects.
That being said, there are a growing number of reasons to choose .net on non-windows platforms.
I don't know of any green threads in .NET -- at least, not built in. However, .NET does have a built-in thread pooling / task execution service. You generally interface via `System.Threading.Tasks.Task`[0], which has static `Run` functions, or use the `Factory` property to get the underlying `TaskFactory` for more fun. Since the executors are pooled, the creation costs will be amortized over your program's lifetime.
Then there's `async`/`await`[1], which is built on top of tasks.
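A small sketch of the pooled-task pattern described above (the names are illustrative, not from any particular codebase):

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    // Task.Run queues work onto the shared thread pool; await releases the
    // calling thread instead of blocking it until the task completes.
    static async Task<int> SumAsync()
    {
        int a = await Task.Run(() => 40); // runs on a pool thread
        int b = await Task.Run(() => 2);
        return a + b;
    }

    static void Main()
    {
        // Main can't be async here, so block once at the top level.
        Console.WriteLine(SumAsync().Result); // prints 42
    }
}
```

Because the pool threads are reused, the cost of spinning up a thread is paid once and amortized across all the tasks the program ever runs.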
I remember that a long time ago, .NET threading was implemented in such a way that it should, in theory, support Win32 fibers. It's why all the low-level .NET APIs don't use the OS thread IDs, but instead have their own IDs, and require explicit mapping.
I believe that it was abandoned as a supported scenario a while ago, though. But you can still see artifacts of that in the docs, e.g.:
"The value of the ManagedThreadId property does not vary over time, even if unmanaged code that hosts the common language runtime implements the thread as a fiber."
Just had a cursory look at the README. Is each lightweight process mapped to an OS thread? What is the minimal memory used by each lightweight process?
It wraps the F# MailboxProcessor mentioned in a comment above. A thread is only allocated when a message is processed, and then freed up afterwards. So there's no permanent thread overhead and 100,000s can be created.
The overhead in terms of memory is the internal state of each Process + the user's Process state. To see the internal overhead, check out the member variables for the Actor class [1].
It's pretty lightweight, although could probably shave a few bytes here and there.
I was expecting they would fix the Uri class on Unix (https://github.com/dotnet/corefx/issues/1745) before releasing 1.0 :( Now I fear their backwards compatibility policies will prevent fixing it later. So much for crossplatformness...
They've renamed some packages, so this might possibly be fixed in another one? I found the Ping class didn't work on any non-Windows OS, but it was fixed in a later package with a new name.
Way too many compatibility issues unaddressed by that PR to have a hope of getting anything changed there. Also it looks like what you (?) are trying to do could easily be solved with the Path.* and Directory.* path manipulation functions. No need to put a million apps at risk when you can wrap your call to Uri() with a three line helper.
It's the first version of .NET that's cross-platform, it's the first version of .NET that's totally open source, and it's the first version of .NET that can build self-contained apps (no need to have .NET Framework installed to run .NET Core 1.0 apps). See the release notes[0] for more info.
It's called "core", because it doesn't contain the Windows desktop UI toolkits, like Windows Forms and Windows Presentation Foundation (WPF), and excludes some other Windows-specific functionality.
Additionally, this is the first version of .NET in which apps can be totally self-contained. Previously, you'd need to install the .NET Framework on your machine to run .NET apps. That is no longer a requirement in .NET Core 1.0.
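For reference, in the 1.0 project.json tooling, self-contained publishing hinges on the "runtimes" node (the runtime IDs below are illustrative examples): listing runtime IDs and publishing with `dotnet publish -r <rid>` produces an output folder that carries its own copy of the runtime, so nothing needs to be preinstalled on the target machine.

```json
{
  "dependencies": {
    "Microsoft.NETCore.App": {
      "version": "1.0.0"
    }
  },
  "frameworks": {
    "netcoreapp1.0": {}
  },
  "runtimes": {
    "win10-x64": {},
    "ubuntu.14.04-x64": {},
    "osx.10.11-x64": {}
  }
}
```

This is also the "runtimes" node the telemetry announcement further down refers to when it mentions checking whether runtime IDs are present.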
So, it's a pretty huge release. And really the first time in decades that MS has built its premier dev stack with multiple-OS support.
>> "I tried it - it puts 50MB of DLLs beside HelloWorld"
That's a bad test for a framework. It's like saying, "I bought an $80,000 car to drive to my neighbor's house across the street - I could have just walked!"
A good test is, "What is the full install size of a real app built on .NET Core?"
And the answer is, not much larger than any other app. All that code has to live somewhere, either in your app or in your libraries.
Mono supports COM interop on Linux, too. At its most basic level, COM is really just an ABI, and there's nothing Windows-specific about it (see also: XPCOM).
.NET Core is the new modular, open source rebuild of the .NET Framework. It's a lot of big changes, enough so that they decided to call it a 1.0 to acknowledge that A) it's a big new effort, but also B) it's not entirely equivalent to the full, previous .NET Framework it is intended to eventually replace.
It's the first non-preview release of the product called ".NET Core"; it is, therefore, versioned as ".NET Core 1.0".
This is a different (but related) product to the .NET Framework (current version, 4.6) -- future versions of the .NET Framework will be built on top of .NET Core, which will be updated more frequently than the larger Framework. So, maybe .NET Framework 5.0 will be built on something like .NET Core 1.3 -- or 2.0, whatever.
The full .NET framework (Windows only) will live on, currently at 4.6.2, I believe. ".NET Core" is the new open source, cross-platform subset of the full .NET framework.
IMHO the full .NET is from today considered a legacy platform. I would compare the situation to WebForms / MVC - who does new projects with WebForms now?
Because it's the first version of the xplat .NET (.NET Core) supported by Microsoft (mono has already done amazing work as an xplat runtime, and has for years).
It's not confusing if you work in .NET; the .NET Framework 1.0 was released in 2002.
I have a Slackware server, I have tried a few times to get started with .net core but I've given up every time because it's just so unclear what I have to do and where to find everything. Maybe if I have Debian or Fedora it's easy, but I don't, and I just cannot seem to find a simple article that says: here download this, run that, and you're good to go. So I guess I'll try again in a few months.
At my shop we have been using ASP Core since RC1 and have been quite happy with it. The tooling feels a lot like Node.js in a way, much different from any previous .NET development. It's awesome to have C# be deployable as docker containers on Linux hosts.
My biggest gripe so far has been that Nuget doesn't have a way to search only for libraries that are Core compatible.
If you are in the San Diego area and interested in working on Core, we are hiring. Inquire at christian.peth(at)millenniumhealth.com
Is there any advantage to running a reverse proxy (nginx) if you already have Cloudfront in front of your cluster, doing the https & gzip work? And, if your web app serves literally 0 static content?
It does not; it uses the same JIT and GC as the Windows version of .NET Core, and the code is also shared with the (full) .NET Framework that ships inside Windows.
Actually, this has always been the case for .NET Core, but you may be thinking of our older preview versions which gave you the option to run your app entirely on mono.
What happened to the promised "dynamic compilation"? There was supposed to be a great improvement in compilation times, which are still quite long on the default sample.
I don't feel like Microsoft has improved enough for me to make the switch from .NET 4.x.
As someone who's mostly moved off of windows, the ability to build/target docker (linux) and dev on osx (already using vs code for node) is pretty compelling.
I'm sure their MVC stack is fine, but this isn't a problem needing a solution in 2016. There are dozens of frameworks in your favorite language that can deal with taking HTTP requests, grabbing data from a backend, and spitting it out as JSON. They all run fine on Linux servers and are easy to develop on a Mac.
C# is a nice language too, but I'd need a compelling reason to jump from Java / Scala over to .Net. And JetBrains puts out products just as good as Visual Studio for every major language.
The one thing that might make me consider this would be MS implementing the actual valuable part of .Net to Linux, Mac, and others: the desktop GUI libs and Visual Studio GUI tools.
Of course this will never happen since the MS dev tools team is beholden to the Office and OS departments.
Instead, I imagine .Net core will not gain much traction.
Any tutorials on how to build a .NET Core web application specifically on Linux? There seems to be a lot to wade through docs-wise, with a lot of it still talking about Windows.
First line of the blog post: We are excited to announce the release of .NET Core 1.0, ASP.NET Core 1.0 and Entity Framework Core 1.0, available on Windows, OS X and Linux!
Presumably the .NET announcements were pre-scheduled, so if there is any connection between these posts, it must go the other way. But I don't believe there is. On the contrary, coincidence is almost certainly what we're seeing here, and in that case it is probably only the strength of your feeling about this topic that makes you see it otherwise.
.NET Core Tools Telemetry
The .NET Core tools include a telemetry feature so that we can collect usage information about the .NET Core Tools. It’s important that we understand how the tools are being used so that we can improve them. Part of the reason the tools are in Preview is that we don’t have enough information on the way that they will be used. The telemetry is only in the tools and does not affect your app.
Behavior
The telemetry feature is on by default. The data collected is anonymous in nature and will be published in an aggregated form for use by both Microsoft and community engineers under a Creative Commons license.
You can opt-out of the telemetry feature by setting an environment variable DOTNET_CLI_TELEMETRY_OPTOUT (e.g. export on OS X/Linux, set on Windows) to true (e.g. “true”, 1). Doing this will stop the collection process from running.
Data Points
The feature collects the following pieces of data:
The command being used (e.g. “build”, “restore”)
The ExitCode of the command
For test projects, the test runner being used
The timestamp of invocation
The framework used
Whether runtime IDs are present in the “runtimes” node
The CLI version being used
The feature will not collect any personal data, such as usernames or emails. It will not scan your code and not extract any project-level data that can be considered sensitive, such as name, repo or author (if you set those in your project.json). We want to know how the tools are used, not what you are using the tools to build. If you find sensitive data being collected, that’s a bug. Please file an issue and it will be fixed.
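The opt-out described in the quoted announcement is just an environment variable; a quick sketch for both platforms (the variable name comes from the announcement, everything else is plain shell):

```shell
# OS X / Linux: opt out of .NET Core CLI telemetry for the current shell session.
# The announcement says "true" or 1 both work.
export DOTNET_CLI_TELEMETRY_OPTOUT=1

# Windows (cmd.exe) equivalent:
#   set DOTNET_CLI_TELEMETRY_OPTOUT=1

# Verify it's set before invoking any `dotnet` command:
echo "opt-out set to: ${DOTNET_CLI_TELEMETRY_OPTOUT}"
```

To make it permanent, put the export line in your shell profile (e.g. ~/.bashrc) so every `dotnet` invocation skips the collection process.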