
.NET Core RC2 – Improvements, Schedule, and Roadmap - choudeshell
https://blogs.msdn.microsoft.com/dotnet/2016/05/06/net-core-rc2-improvements-schedule-and-roadmap/
======
Eyas
So happy .NET Core was delayed-- a very cool idea, but trying to write a
library that targeted a few platforms was an absolute nightmare. Just knowing
the difference between dnxcore50, netcore50, and netcoreapp was confusing.
Needing ASP.NET to run unit tests was strange when I wasn't targeting the web
at all, and the dotnet CLI didn't exist yet.

The standardization around the .NET Standard seems great. dotnet as a CLI is
great. Kudos to the team for having the humility to push back deadlines and
deliver an end-to-end solution that actually improves .NET development.

~~~
jsingleton
I feel your pain. I'm trying to get Scientist.NET working and it's far from
easy on RC1. The issue is linked from my list of ASP.NET Core Library and
Framework Support ([https://anclafs.com](https://anclafs.com)).

------
jmkni
Exciting stuff!

I'm going to be sticking with 4.6 for the foreseeable future for serious stuff
because I'm a big fan of ServiceStack, and Web API feels like a step
backwards, even in .net core.

Even for 4.6, the development of .net core has still resulted in some nice
gifts, such as the task runner explorer in Visual Studio for Gulp/Grunt, and
the auto installing of NPM and Bower packages when a project.json or
bower.json package is detected.

For small/side projects though, .net core it is!

If Microsoft is still on a spending spree, they should really purchase
ServiceStack and use it to make some serious improvements to Web API.

~~~
tucaz
As a fan of .NET Web API and having not worked with ServiceStack may I ask
what I am losing by not embracing ServiceStack?

~~~
jmkni
I'll do my best to explain!

I feel as though ServiceStack was built from the ground up to be a framework
for building a RESTful API, whereas Web API feels like more of an afterthought
to MVC.

If you follow the best practices, you end up with a well structured API from a
programming perspective. -
[http://stackoverflow.com/a/15235822/969613](http://stackoverflow.com/a/15235822/969613)

It's opinionated, and as a result you have a clearly defined place for your
services, your request/response DTOs, your logic, etc. It's all very loosely
coupled and testable, which I like.
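
As a rough sketch of that structure, based on ServiceStack's message-based
conventions (the route, DTO names, and service here are illustrative, not from
any real project):

```csharp
using ServiceStack;

// Request DTO: defines the route and the service contract.
[Route("/orders/{Id}", "GET")]
public class GetOrder : IReturn<GetOrderResponse>
{
    public int Id { get; set; }
}

// Response DTO: kept separate so the wire contract is explicit.
public class GetOrderResponse
{
    public int Id { get; set; }
    public string Status { get; set; }
}

// Service: the one clearly defined place for the logic.
public class OrderService : Service
{
    public object Get(GetOrder request)
    {
        return new GetOrderResponse { Id = request.Id, Status = "Shipped" };
    }
}
```

The DTOs carry no behavior, so they can live in a separate assembly that
clients share, which is part of why the layout stays loosely coupled.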

In contrast, Web API doesn't make it clear how to split up your MVC
controllers and your API controllers. You are kind of left to your own
devices, and I find that every developer comes up with their own system, and
every Web API project is structured differently. With Web API, you also tend
to keep all of your API controllers, models etc in the same project, which I
think can become more difficult to maintain over time, and is also harder to
write tests for.

For dependency injection, it comes out of the box with Funq, which I find to
be really capable, but you can use any other DI framework, and it's simple to
swap out Funq for your preferred one.
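
The Funq registration happens in the AppHost; a minimal sketch, assuming
current ServiceStack conventions (IGreeter/Greeter are illustrative names):

```csharp
using Funq;
using ServiceStack;

public interface IGreeter { string Greet(string name); }

public class Greeter : IGreeter
{
    public string Greet(string name) { return "Hello, " + name; }
}

public class AppHost : AppHostBase
{
    // The assembly argument tells ServiceStack where to find services.
    public AppHost() : base("My API", typeof(AppHost).Assembly) { }

    public override void Configure(Container container)
    {
        // Funq registration; ServiceStack injects registered
        // dependencies into services (e.g. via public properties).
        container.Register<IGreeter>(c => new Greeter());
    }
}
```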

For ORM, ServiceStack provides OrmLite.net, which I find to be fit for purpose
most of the time. If you need the more powerful Entity Framework, that's easy
to swap out too, but I find OrmLite to be fast, and easy to work with.

For caching, you can easily pull in Redis and use it instead of RAM for
storing user sessions.

There's a bunch of authentication providers; you can add authentication for
Google/Facebook/LinkedIn/GitHub etc. with one line of code, plus the
configuration with the third party. -
[https://github.com/ServiceStack/ServiceStack/wiki/Authentication-and-authorization](https://github.com/ServiceStack/ServiceStack/wiki/Authentication-and-authorization)

One thing I really like is validators. I know in Web API you have attribute
validation on models, but validators in ServiceStack give you a lot more
control. They allow you to intercept an API request before you even hit the
service, and apply any logic/rules you need to.
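
A sketch of such a validator, using ServiceStack's bundled FluentValidation
(the DTO and rules are illustrative):

```csharp
using ServiceStack;
using ServiceStack.FluentValidation;

public class CreateBooking : IReturn<CreateBookingResponse>
{
    public string Email { get; set; }
    public int Rooms { get; set; }
}

public class CreateBookingResponse { }

public class CreateBookingValidator : AbstractValidator<CreateBooking>
{
    public CreateBookingValidator()
    {
        // These rules run before the service is invoked; failures
        // short-circuit the request with a structured error response.
        RuleFor(x => x.Email).NotEmpty().EmailAddress();
        RuleFor(x => x.Rooms).GreaterThan(0);
    }
}
```

If I recall the wiring correctly, validators are enabled in the AppHost with
Plugins.Add(new ValidationFeature()) and registered from their assembly.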

Swagger is easy to configure too: basically one line of code, then drag and
drop the Swagger HTML/CSS/JS, and it works. You can also document Swagger
using attributes on your DTOs in C#.

It has a built-in mapping library, a built-in JSON parsing library (which is
faster than Json.NET), and logging is easy to do as well.

The developers are really good too. Literally any problem you come across, you
will find a well thought out StackOverflow answer by one of the main
developers, usually Demis (mythz).

I'm probably not explaining this very well! Maybe somebody more articulate
than myself will come along and provide a better answer, but I recommend
checking it out. It's massive (they call it ServiceStack because it is
literally a stack of services) and when I need to use Web API, I just find it
extremely lacking in comparison.

Hope this helped!

~~~
Scarbutt
For building an HTTP API, why would you tie yourself to a closed-source,
non-free framework when there are so many good alternatives out there? Besides
dealing with the ServiceStack licensing, you now have to deal with all the
Windows licensing too.

I mean, I doubt it has some secret sauce that makes it better over everything
else for building a REST API.

~~~
layoric
ServiceStack isn't closed source; the framework, IDE integrations, and client
libraries are all on GitHub.

[https://github.com/ServiceStack](https://github.com/ServiceStack)

Licensing means there are dedicated developer resources working on regular
improvements all the time. Disclaimer: I am one of those resources :).

Regarding the 'secret sauce', it's not a secret at all; as others have said,
it has a big focus on a simple message-based approach. A recent SO question
sums it up.

[http://stackoverflow.com/questions/36962263/can-i-use-servicestack-routes-with-method-parameters-instead-of-a-dto-class-for/36962502#36962502](http://stackoverflow.com/questions/36962263/can-i-use-servicestack-routes-with-method-parameters-instead-of-a-dto-class-for/36962502#36962502)

Edit: grammar

------
Renner1
I am concerned that .NET Core has put .NET on a very dark path.

It feels like the developers have built this from a theoretical programming
paradise point of view: Everything's "just" a handler in a pipeline. All
dependencies are now "micro". Nothing has a hard dependency on anything else.

The result of all this is a framework that might be beautiful at its core,
but becomes gradually more prickly as you get closer to the development
surface. It feels like their answer to usability is "Oh, we'll just throw
together a metapackage for that." It scares me that they think they can wrap
up all the complexity they've created in a way that won't lead to the
developer having to dig through the .NET source to make stuff work.

It's worrisome to see how brittle the system is at this late stage. It shows
that this has been built from the inside-out rather than outside-in with a
focus on usability. All the ugly stuff that should be buried inside the
framework has been pushed out to the edge of the system for the developer to
deal with. Watch any of the recent ASP.NET streams to see numerous
illustrations of this whenever they open up a Core project.

ASP.NET used to be a reprieve from all the hideousness of other tools like NPM
and Node. Now I'm seeing all the same problems in ASP.NET. Have you tried
deploying a DNX / Core app? Good luck if any file path is longer than 255
characters. Tried pulling in the dependencies from a project.json behind a
corporate firewall? Good luck. It might work half the time, but it needs to
work 100% of the time to be practical.

All these tiny metapackages will lead to dozens of projects all in
intermediate undefined states whereas in the past we had a clear distinction -
that repository from 2010? It's MVC3. Ok, we know how to upgrade that because
the process is well-defined. I expect ASP.NET core will be more a case of
deleting all your semver numbers from project.json, executing the update
command and holding your breath. I dread this the way I dread opening any
Javascript/NPM project that's more than a year old.

Given the rise of Javascript clients with REST servers I question whether it's
even worth sticking with Core when other languages can do the same thing with
fewer pain points. I really want to see Core succeed but I'm just not sure
about it given what we've seen so far.

~~~
pbz
I've used Core RC1 for a few months now and I haven't seen any of the issues
you're worried about. I've had way more pain with TypeScript, VS, and node
integration than anything .NET specific. From my point of view the .net piece
just works, even with the RC bits.

It's just a bit more granular than before, but VS helps you a lot. All you
have to do is type a class name and VS can suggest importing a library that
you never included in your project. This alone, in my opinion, makes it a lot
easier than before; I wouldn't want to go back.

~~~
Renner1
The fact that one person hasn't experienced these issues doesn't invalidate my
argument. I am sure some people will have no issues, but there will be some
percentage of people who do have issues and I suspect that percentage will be
quite large. I also suspect a lot of people who have not encountered issues
aren't actually building software for an employer. As someone else mentioned,
a lot of the more vocal people tend to be unemployed hobbyists who have no
deadlines or who don't venture off the beaten path because they don't have any
requirements.

~~~
pbz
You seem to suspect a lot without bringing any evidence; nothing but FUD. I
can just as easily suspect that you're full of it.

~~~
garganzol
I'm with Renner1 on this. He mentioned important areas where .NET Core falls
apart. Would like to see it changing, but as for now almost nobody is going to
use .NET Core for anything serious. .NET Core is selfishly designed for .NET
Core developers, not for .NET Core customers.

~~~
cmdkeen
OK, to address those: the micro packages are Microsoft packages, which means
they'll be tested thoroughly, and interdependencies between them will be
recorded via NuGet. The whole "delete numbers and update" bit makes no sense
given the NuGet update command.

The point of current ASP.NET Core development is that they are being open
about the sausage-making process in order to avoid massive breaking changes in
the future. Opening an MVC3 project has a defined update process because each
version bump shipped with one. The team is aiming to only add new
functionality going forward (at least for a very long time); they don't want
to have to release a Core 2.0, they want to ship a 1.18. Things may break at
the moment, but that is almost certainly because you're hooked up to the MS CI
build MyGet repository (certainly that is what the ASP.NET live streams are
doing), so it is hardly surprising that things don't always work. They've just
committed to 6+ weeks of fixing stuff before the RTM; stability will improve.

------
lllr_finger
As a developer working on a site with thousands of concurrent connections that
recently went to production with RC1 and is implementing back-end applications
as well, I've been looking forward to RC2. All this talk that .NET Core will
never work for X or Y reasons is amusing - it works right now on large
sites/projects, and has been a refreshing change for our team(s) with a
minimal amount of headaches.

I would suggest trying it before knocking it at a theoretical level,
especially the MVC Core stuff that combines 4.X's MVC and Web API and enhances
it.

------
swalsh
I've been using .NET Core in "production" (I get fewer than a thousand users a
day... so not heavy production) for almost a month now. I love the platform,
and I love how it keeps getting better. Glad to see this timeline, and I'm
really glad to see Xamarin as a part of this.

------
taspeotis
Release Candidate 1 really should not have shipped.

I had a prototype product using DNX "Release Candidate" and then DNX was
killed.

It was a prototype and nothing of real value was lost. I'm not holding a
grudge, and I'm sure the people at Microsoft would have avoided it if they had
a crystal ball.

Personally I love .NET, find C# extremely enjoyable and look forward to RC2
but the way RC1 ended left a sour taste in my mouth.

------
rogihee
I have been using MVC since version 2, but I definitely will not use Core for
my first project. There are so many breaking changes that I cannot imagine a
1.0 will cut it.

Also, .NET Core is a clusterfuck built from an architectonaut ivory tower;
just look at all the GitHub issues around DataTables. Yes, they are perhaps
old and outdated, but there are a million production items relying on them:
DevExpress, open-source Excel serializers, etc. After a decade in the
framework they have earned a spot, and outright refusing to include them does
NOT help you gain traction. Because they contained anti-patterns du jour or
something, whatever. I have a ton of production code relying on them, code
that is and has been serving me and my customers very well for the past 10
years.

Also, the DataTables discussion raised issues about the database schema. The
proposal from a softie with a 5-minute stab at a generic albeit typed system
for covering all use cases surrounding the data and column types in a database
is just mind-bogglingly naive. What is the average age of the people doing
.NET Core?

~~~
sitharus
People relying on features that aren't in .NET Core can keep using full .NET,
and that's the intended result. Full .NET is going to be supported for the
foreseeable future.

However for those projects that don't need them .NET Core is an interesting
alternative especially with cloud deployment to Linux or a BSD host.

I work at a large .NET based company whose core product could not work on
Core, but we certainly are considering it for newer projects. I'm also many
years past being a junior developer.

~~~
rogihee
Well, it's fine if features are ported later or in separate packages or in
some other/better format, but what I understood around System.Data is that
vital pieces are missing, without a clear vision on how to deal with these.
And if they are not added before 1.0 it may have far-reaching consequences.
See my comment below for references.

~~~
cmdkeen
The data story in general for Core is something they have repeatedly said is
going to be worked on incrementally. Entity Framework Core is explicitly being
sold as "use only if you have to run on Linux/Mac"; otherwise stick with EF6.

They're also making it very clear that features can and will be added/ported
to .NET Core in the future based on what developers need and want. The thing
they are absolutely keen to avoid is any breaking changes from 1.0 onwards;
the plan is that you'll see 1.1, 1.2, 1.3, not a semver 2.0, for a very long
time. To that end, the legacy of DataTables is treated as "how can we do this
better", not as decades of old libraries to be supported from day one. The 4.6
Framework exists to do that and isn't going away if that is what you need.

------
tacos
Did you intentionally pick the one thing I didn't say was broken? Friggin'
writev() still doesn't work, Scott! That breaks about 20,000 tools, often
subtly. The fix somehow didn't make it into the latest insider build.

And I can't tell you how much I look forward to updating every Windows
component then rebooting twice so I can _maybe_ run cmake in a console window.
Geeze, Scott, you've lost the rabbit.

~~~
dang
You can't be personally uncivil like this on HN. We ban users who do it
repeatedly, so please stop doing it.

We detached this comment from
[https://news.ycombinator.com/item?id=11701610](https://news.ycombinator.com/item?id=11701610)
and marked it off-topic.

------
garganzol
I'll share an unpopular opinion: the .NET Core stuff is going to be a train
wreck (especially ASP.NET Core). The main reason for this is development
anarchy and an "all or nothing" style of problem solving.

Just an example: ASP.NET Core supports so many host servers and runtimes that
nobody can tell you what it really supports. A little bit of this, a little
bit of that, and nothing really works in the end. This is a huge contrast to
good old ASP.NET (Classic), which just works on IIS. Want a website? Cool,
IIS + ASP.NET + Nginx/HAProxy and you are golden.

Another example: ASP.NET Core has a WordPress-style request processing
pipeline, which is called middleware. The problem is: every module gets every
request, and there is no way to lazily route them based on some criteria like
"*.gif" is handled by that module, "gen/*.jpg" by another one. Ask people who
use WordPress and install a lot of plugins. They will tell you that it becomes
as sluggish as a turtle. Why? Because every plugin handles _every_ request
even when it does not relate to the given plugin at all.

~~~
rjbwork
>every module gets every request and there is no way to lazily route them
based on some criteria like "*.gif" is handled by that module, "gen/*.jpg" by
another one.

Simply false. OWIN middlewares can determine whether to pass execution to the
next module in the pipeline or not: return Task.FromResult(0) rather than
calling Next.Invoke(context).

[http://stackoverflow.com/questions/18965809/owin-stop-processing-based-on-contition](http://stackoverflow.com/questions/18965809/owin-stop-processing-based-on-contition)
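
The short-circuit pattern can be sketched with Katana's delegate overload of
Use (the .gif check is just an illustrative condition):

```csharp
using System.Threading.Tasks;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.Use(async (context, next) =>
        {
            if (context.Request.Path.Value.EndsWith(".gif"))
            {
                // Short-circuit: handle the request here and never
                // call next(), so downstream middleware is skipped.
                context.Response.StatusCode = 404;
                return;
            }
            await next(); // otherwise continue down the pipeline
        });
    }
}
```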

Some of your other criticisms may be true, but I figure that if Java can be so
ubiquitous across platforms and servers, .NET core can manage it.

~~~
garganzol
Here is the code for middleware init:

    appBuilder.Map("/something/something", doit =>
    {
        doit.Use<Pipepart1>();
        doit.Use<Pipepart2>();
    });

See that doit.Use<Pipepart2>()? Such construct implies that Pipepart2 should
be compiled and loaded right now. This is simply not scalable. What I really
want is Pipepart2 being loaded on demand only when the corresponding request
is encountered.

~~~
rjbwork
Hmm. You seem to be talking about a couple of separate things: lazy
instantiation of objects, lazy loading of modules/assemblies into the
AppDomain, and early termination of OWIN middleware modules.

You attach the middleware to the appBuilder via Map (or other methods) during
application startup, so it will simply incur the assembly penalty at startup.
An assembly is loaded the first time it is referenced in executing code in an
AppDomain. This is fairly unavoidable in the .NET world, though I agree that
it would be preferable if it were otherwise.

Secondly, that Use call is in fact executed during the Map: it does all the
configuration and setup that it needs to do at startup, and is then ready to
process requests that are handed to it. Internally it actually gets set up on
an entirely new AppBuilder that the main one delegates to, I believe, so it
only gets invoked on a correct path match, not on every request; only the
matching logic is invoked if the request makes it to that point in the
pipeline.
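
That branching behavior can be illustrated with a Map sketch (the path and
handler body are illustrative):

```csharp
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Everything inside this branch is configured once at startup,
        // but per request it only runs when the path matches; requests
        // for other paths never reach the branch's middleware.
        app.Map("/something/something", branch =>
        {
            branch.Use(async (context, next) =>
            {
                await context.Response.WriteAsync("handled by the branch");
            });
        });
    }
}
```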

The early termination was addressed in my previous post, of course.

~~~
garganzol
Yes, assembly loading is exactly what I'm talking about.

>An assembly is loaded the first time it is referenced in executing code in an
AppDomain. This is fairly unavoidable in the .NET world

Premature assembly loading was easily avoidable with Web.config where you
specified the request filter and assembly qualified type name of a handler.
Hope it will be covered in ASP.NET Core / OWIN someday.

Yes, you are right about the map: it allows you to select who handles what.
Still, Web.config remains a much better contender in this deep matter: it
allows you to attach modules without changing code. A common scenario like
adding a custom auth module to an existing web application is a breeze with
Web.config.

Code-based mapping looks a bit clunky after experiencing the gifts of
Web.config flexibility and efficiency.

~~~
paulirwin
The exact problem with Web.config modules is that they are never handled in
your code, only in the config. It also is very IIS-specific, precluding it
from working easily with other web servers cross-platform. With OWIN, by
starting with code, there is nothing stopping you or someone else from making
a config-file-driven middleware loader to accomplish the functionality you're
looking for. i.e. app.LoadMiddlewareFromConfig("middleware.json")
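
A hypothetical LoadMiddlewareFromConfig along those lines might look like this
(the extension method, file format, and all names are invented for
illustration; only IAppBuilder.Use(object, params object[]) is the real OWIN
surface):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using Newtonsoft.Json;
using Owin;

public static class AppBuilderConfigExtensions
{
    // Reads assembly-qualified middleware type names from a JSON array
    // and attaches them by reflection, restoring the Web.config-style
    // "attach without code changes" workflow on top of OWIN.
    public static IAppBuilder LoadMiddlewareFromConfig(
        this IAppBuilder app, string path)
    {
        var typeNames = JsonConvert.DeserializeObject<List<string>>(
            File.ReadAllText(path));
        foreach (var name in typeNames)
        {
            var type = Type.GetType(name, throwOnError: true);
            app.Use(type); // Katana constructs the middleware for us
        }
        return app;
    }
}
```

Usage would then be app.LoadMiddlewareFromConfig("middleware.json"), with the
JSON file containing something like ["MyApp.AuthMiddleware, MyApp"].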

~~~
garganzol
The only thing that precludes Web.config from working cross-platform is the
absence of reliable implementation of a cross-platform web server. For
example, I don't even consider Nginx reliable on Windows; some nasty effects
are in place. The best home for Nginx is Linux. The best home for IIS is
Windows. Cross-platform sounds sweet in theory, but in reality it goes
downhill pretty fast with subtle defects.

~~~
Skinney
What's stopping you from using IIS on Windows and Nginx on Linux? It's not
like you're committing to one server for one application.

------
tacos
Like early OneDrive and Windows 8, I really hope a couple people got fired
over this debacle. But like today's OneDrive, Windows 10 and .NET Core, I'm
glad it's finally moving forward. I know HN has an anti-Microsoft bias from pg
on down, but this is serious, proven tech that's moving in the right
direction. Say what you want about MS, but their R&D budget is larger and more
focused on developers than any other company, decade after decade.

~~~
shanselman
Wait, what debacle? Why do you want us fired?

~~~
tacos
I owe you a long, private email (I'm a fan of your work.. and an ex Microsoft
dev manager) but the way this was handled was not good. An RC that wasn't,
public fumbling, throwing away schedules, and you can't even come up with a
sane name for the damn thing. Too early, too weird, and poorly managed. For
such an amazing, important technology that's such a massive leap forward in
size and interoperability!

It smelled like OneDrive. And I'll provide this quote to demonstrate the type
of insanity that leaks out of my beloved Redmond on occasion...

"Prior to Windows 8.1, we had two sync experiences. One used on Windows
7/8/Mac to connect to the consumer service, and a second sync engine to
connect to the commercial service (OneDrive for Business). In Windows 8.1 we
introduced a third sync engine..."

Current beef is the BashOnWindows alpha. Why was this pushed out so early?
Quite literally nothing works on it. The forward progress even in the past few
weeks is impressive but... it's bad. Ubuntu Trusty was a weird release to
begin with and now it's in a weird time in its lifecycle where it's very
difficult to get modern versions of gcc, clang, python 3.5 or even ffmpeg. Not
that you could run cmake anyway. But meanwhile you shipped bits where "apt-get
update" itself failed right out of the box. I don't get it.

EDIT: HN is limiting my ability to reply to you Scott but my frustration with
Bash on Windows is both. The quality of the early bits is poor and I find the
timing weird. Likewise Trusty itself is at a maximally frustrating stage of
its lifecycle for anyone new jumping in (Stack Overflow is already filling up
with newbie Bash/Ubuntu questions that are often tied to Windows bugs. And
geeze, just let the insanity of that sentence sink in.) From a technical
perspective I don't love the alpha. From a dev/engineering/coder perspective I
don't love Trusty. From a strategic perspective, just like .NET Core, I'm
wondering WTF is going on, thus the rant.

~~~
shanselman
Maybe we'll have to agree to disagree. The public asked for open and we gave
them Open with a pretty capital O. When this project started we didn't know we
were buying Xamarin, so that required a pivot. Yes it looked messy because it
was messy. It's hard to be Open AND Organized. Node is a mess, remember io.js?
Software Development _is_ messy and this was a peek into the kitchen.

I can't speak to OneDrive, but it's clearly a problem. Unfortunately, I work
in DevDiv, not Windows.

As far as Bash is concerned, I think "literally nothing works" is not fair. I
helped with this release. I presented on it for 90 minutes this morning and
installed and brought in build-essentials, worked on redis-server, g++, ruby,
and it worked fine. Yes there's rough areas, but we can update it often with
WU. Also, you're complaining about Trusty in the same breath as Bash on
Windows. You'll be able to update to 16.04 later so that might help. And,
you're certainly able to Hyper-V any Linux and SSH in as well.

~~~
tacos
I'll give a polite tip of the hat but warn that cutesy redis demos are
starting to wear thin. The C++ support in the tools you get after installing
build-essential on Trusty smells like Visual Studio 2010.

As for .NET core, you'll get plenty of warm fuzzy "just happy to be part of
the journey" crap here but don't forget that for every unemployed enthusiast
chatting in an issues thread on github there are 100 professionals working
their asses off trying to ship solutions. That's why you exist. Don't lose
that.

"Things may not work perfectly, but that is why it's not production code." Ugh.
They called it a Release Candidate! And it was... crap.

Tools support isn't like horseshoes, almost doesn't count. Likewise you need
breadth and depth with your Linux support or this is going to be a total
friggin' debacle for devs.

~~~
voltagex_
What demos do you want? If redis is "cutesy", what isn't?

~~~
tacos
Something hip like one of the neural toolkits? I guess that would be hard to
do since you can't install CUDA, R packages or the JDK at the moment. Maybe
run Docker? Nope. Heck, I'd settle for being able to run tar or rar. Those
don't work either. Cmake? Nope, broken. Valgrind? nope. Mono? nope.

He's welcome to run the same redis demo he does every tradeshow and pretend
everything's hot and ready for action, but it's misleading at best.

~~~
bitcrazed
We're focusing on mainstream developer scenarios to start with (esp. Ruby,
Java, Python, etc); we'll get to more advanced, esoteric, and exotic
technologies later.

FWIW, many core Linux tools (e.g. tar, gzip/gunzip) work* well, and tools like
gcc/g++, Mono and CMake work* well in current insiders builds.

* By "work", we mean, they work in our scenario testing. If you find issues, please log bugs at [https://aka.ms/winbashgithub](https://aka.ms/winbashgithub).

~~~
tacos
I'm running the latest insider build. The bugs reported are based on personal
experience and I verified they were currently open on GitHub as well before I
posted. Tar hangs, cmake can't find a compiler, Valgrind goes nuts, and mono
doesn't run. This is right at the top of the current issue list on GitHub with
Microsoft annotations confirming the bugs.

I do what I can to report issues but frankly I do pretty basic development and
hit an immediate brick wall with this stuff. I can't believe I'm celebrating
Cygwin both for stability and breadth of packages. It's nuts.

I don't expect you guys to demo tensorflow. I'm simply saying that getting
Redis limping borders on false prophecy.

~~~
shanselman
"He's welcome to run the same redis demo he does every tradeshow and pretend
everything's hot and ready for action, but it's misleading at best."

Here, I just built TensorFlow on my Surface while sitting here in an airport.
Here's a screenshot:
[http://i.imgur.com/WlNNuVt.png](http://i.imgur.com/WlNNuVt.png)

I'll try some more complex TensorFlow examples on the plane.

I'm sorry you're having (or had) issues with the build on your machine, but
your negativity is kind of a bummer. We're happy to help chase down filed
bugs.

~~~
tacos
writev() still doesn't work, Scott. That breaks countless tools, often subtly.
The fix somehow didn't make it into the latest insider build.

My solution? Wait for Windows update to change every Windows component, in
hopes that I can then _maybe_ run cmake in a console window.

Like I said, I appreciate the recent progress, but this is exactly the type of
goofy situation you used to jump up and down about years ago.

~~~
shanselman
Who are you?

