Every release since 2008 has been getting slower and slower for C++, the C compiler is awful, PGO instrumentation in Win8 is less capable, and there are no equivalents in the Windows ecosystem to gcc's likely/unlikely, oprofile, or valgrind.
It's at the point now where developing on Windows vs Linux is a serious performance and security impediment due to their withering native toolchain. But I guess nobody over there gets promoted for fixing the hard stuff.
RE: likely/unlikely, VS has __assume(0), which isn't exactly the same thing I know, but it is something and does help. I'm actually in favor of us doing more with static annotations to bring PGO style optimizations to non-PGO builds. If you feel the same way please be louder about it, but realize there is a vocal group of people who consider static annotations harmful (and they have a large body of evidence in __forceinline backing them up).
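For anyone reading along, the __assume(0) idiom being referred to looks roughly like this - a minimal sketch, and note that it marks a path unreachable rather than merely unlikely, which is the caveat above:

    // Promise the optimizer the default case never happens, so it can drop
    // the range check / guard on the switch. MSVC-only; undefined behavior
    // if the promise is broken. Not a true likely/unlikely hint.
    int describe(int kind)
    {
        switch (kind)
        {
        case 0: return 10;
        case 1: return 20;
        case 2: return 30;
        default:
            __assume(0);   // "this never happens"
        }
    }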
oprofile: There is ETW/xperf, and of course a variety of instrumented profilers (both shipping and internal)
Although I do wish my team was larger, and it doesn't get all the love that some of the flashier UI stuff does, I wouldn't go as far as to say the toolchain is withering. Some of the smartest people I know are working on my team with me on these problems.
Here's some feedback for low latency development in Visual Studio. Try not to take it personally, I don't hate you, I just hate every MSVC I've ever used:
- With a profiler, we typically see only a few hundred samples in our simulation runs; the rest (between 99.999% and 99.99999% of the samples) are in WaitForSingleObject. PGO compiles only 0.4% of our application for speed, and our response times are about 20 usecs slower with it on.
- RE xperf (and WinDbg): Stop bundling this shit in "Toolkits". The installers/downloaders are buggy as fuck and break the main VS2012 installer; I don't want to run a bunch of random msi files on our prod server core box; and the download pages are a maze of redirects.
- __assume is so useless. How often does someone write a branch that does nothing every time? We need manual size/speed optimization.
- PogoAutoSweep crashes threaded programs if you don't suspend every other thread, yet this is only quasi-documented. The PogoSafeMode build flag/environment variable appears to be ignored.
- The filename postfix that PogoAutoSweep adds breaks the VS2012 PGO menu options.
- The VS2012 PGO instrumented/optimized menu items overwrite the target exe. So when you realize something's wrong in the environment, or click something by mistake and didn't manually reshuffle the build dir, you have to rebuild everything. Name the target .instrument.exe or put it in another directory or something, please.
- There's nothing one can do to limit the VS2012 profiler to specific threads. I was able to write hooks to target threads in VerySleepy in an afternoon, but somehow this feature escapes MS.
- The interface for instrumenting specific functions is terrible, use a plain text file or decl_spec FFS.
- If there are #defines or other ways to detect an instrumented build, they're terribly documented.
- PGO instrumentation/optimization is woefully obtuse. What did it pick for speed? Why did it pick it? What branches did it fold/unfold? How does the pgc weighting actually work? Can I artificially create my own pgc?
A perl script that compares the offsets in objdump will give me more information than most of the MSDN articles about this shit.
- Not related to our main response loop, but we can see in our logging threads that the LFH malloc appears to often call RtlAnsiStringToUnicodestring. Seriously, what the fuck?
- Speaking of which, changing the malloc implementation is still horrible even after the VS2010 msvcrt changes. On Linux, you can set LD_PRELOAD and try out tcmalloc or the Intel TBB allocator in about 3 minutes. In Visual Studio, prepare to spend a few hours getting a reasonably large project to build with these.
- Why is there SemaphoreSlim in C# but not C++? Why is there no Benaphore primitive that can also be used in WaitForMultipleObjects?
- Serious issues in Microsoft Developer Connect are often ignored, closed as behaves as expected, or dismissed off hand. For example, I was tearing my hair out over this one, and the resolution is truly outrageous: http://connect.microsoft.com/VisualStudio/feedback/details/7...
I am certain that this comment on Hacker News will make a bigger impact than anything I've ever seen on Microsoft Connect.
- RE Instrumenting profilers/bounds checkers: Any project that is reasonably large and has multiple configs/3rd party libraries is bad enough to manage in vanilla Visual Studio that instrumenting it with some other 3rd party plugin becomes a serious time sink.
- There is still no valgrind/cachegrind equivalent that provides the same level of detail. The closest thing is either Intel Pin or Rational Purify/Quantify and they are expensive and poor substitutes. Microsoft is the only company that can see and modify the source of the kernel, runtime, linker, and machine code generation, so I don't know who else they expect to write this for them.
- Our statically linked application takes 20 minutes to link and the link is not parallel. C++ compiles are likewise brutally slow. We resort to developing in VS2008 and compiling release stuff in VS2012. And no, I'm not going to turn on precompiled headers, MSVC builds incorrect binaries about 5% of the time as it is.
- Concerning precompiled headers, sharing a single pch file across projects or strictly controlling a single vcproj/vcxproj with the compiled unit is de facto impossible.
This person works on PGO and code generation and wanted feedback. Everything I said related to PGO, instrumentation, or parts of the CRT that relate to that (and one about their broken feedback system).
But yes, thick skin is required for those who self-identify!
Then again I have always held the view that if my users are unhappy, it is a personal failing on the part of my team and myself. (Although I am low enough on the software engineering totem pole that I can't really do anything outside of ensuring the components I create are as user friendly and high quality as possible!)
This basically means that if you are using (say) Autodesk products, you have to compile all your plugins with exactly the same compiler version the main product (exe) was compiled with.
This is an absolute mess. Talk about 7 different MSVCRxx.DLL and MSVCPxx.DLL in one process.
Why can't you think of a scheme where backward compatibility works for the CRT/C++RT too?
Oh, and on naming things - please stick with VS2012 -> CRT2012 if possible.
Also, why are ShortName / PlatformName in Visual Studio named so inconsistently - amd64/x64 on one side versus Win32/x86 on the other - and why do SDKs coming from Microsoft sometimes place things in the lib/ folder under lib/$(ShortName) and sometimes under lib/$(PlatformName)?
I know why - because these things are not important. Just hard-code it in your .vcxproj and live a happier life :) Automation begone!
This is a complex issue, but consider abandoning PGO and just compiling for speed then. PGO doesn't help in each and every case.
> RE xperf (and WinDbg): Stop bundling this shit in "Toolkits".
How else would you bundle it? It's not simple to put something as part of the base OS image. And I haven't heard about these installers breaking the VS installer - that sounds like a bad bug.
> __assume is so useless. How often does someone write a branch that does nothing every time?
It's commonly used as a retail version of a debug ASSERT macro. But yes, like I said earlier - I wish we would do more with static annotations, but I've gotten push back.
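The shape of that pattern is roughly the following (a sketch only; the macro name is made up for illustration):

    // Debug builds check the condition; retail builds turn it into an
    // optimizer hint via __assume.
    #ifdef _DEBUG
        #include <cassert>
        #define RETAIL_ASSERT(expr) assert(expr)
    #else
        #define RETAIL_ASSERT(expr) __assume(expr)
    #endif

    int length_of(const char* s)
    {
        RETAIL_ASSERT(s != nullptr);   // retail: lets the optimizer drop later null checks on s
        int n = 0;
        while (s[n]) ++n;
        return n;
    }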
> PogoAutoSweep crashes threaded programs if you don't suspend every other thread, yet this is only quasi-documented. The PogoSafeMode build flag/environment variable appears to be ignored.
I've never seen PogoAutoSweep crash - do you have a repro? PogoSafeMode doesn't affect PogoAutoSweep, only probe generation.
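For anyone who hasn't used it: PogoAutoSweep is the hook you call from your own code to dump and reset the profile counters mid-run. A rough sketch, going by the shape documented for later toolsets (where the entry point is spelled PgoAutoSweep) - the header name and exact signature for VS2012 are my assumption:

    #include <pgobootrun.h>   // assumption: declares PgoAutoSweep; link against pgobootrun.lib

    void on_warmup_complete()
    {
        // Writes the counters collected so far into a fresh .pgc (with the
        // sweep name appended to the file name) and resets them, so the
        // warmup phase can be kept out of the training data.
        PgoAutoSweep("warmup");
    }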
> The filename postfix that PogoAutoSweep adds breaks the VS2012 PGO menu options.
Haven't heard of this either, but stay tuned. I don't like the PGO menu options as they currently stand.
> There's nothing one can do to limit the VS2012 profiler to specific threads.
I can forward that request to the profiler team.
> The interface for instrumenting specific functions is terrible, use a plain text file or decl_spec FFS.
Are you talking about PGI or an instrumented profiler?
> If there are #defines or other ways to detect an instrumented build, they're terribly documented.
There isn't an easy way, and having different code in the PGI build versus the PGU build would be problematic.
> PGO instrumentation/optimization is woefully obtuse. What did it pick for speed? Why did it pick it? What branches did it fold/unfold?
> How does the pgc weighting actually work?
The obvious way: the counts are multiplied by the provided factor before being merged into the PGD.
> Can I artificially create my own pgc?
> Not related to our main response loop, but we can see in our logging threads that the LFH malloc appears to often call RtlAnsiStringToUnicodestring.
No idea (CRT owns malloc, Windows owns LFH).
> Speaking of which, changing the malloc implementation is still horrible even after the VS2010 msvcrt changes. On Linux...
I'm not an expert, but my understanding was that malloc and friends were weak symbols, and if you just linked in an obj that defined malloc it would be selected as the "real" malloc without causing an ODR violation.
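In other words, something along these lines - a sketch only, where my_alloc/my_free are hypothetical stand-ins for whatever tcmalloc/tbbmalloc entry points you would actually forward to:

    // Compile this into an .obj and link it ahead of the CRT; per the parent
    // comment, these definitions should then win over the CRT's malloc/free.
    #include <stddef.h>

    extern "C" void* my_alloc(size_t n);   // hypothetical replacement allocator
    extern "C" void  my_free(void* p);

    extern "C" void* malloc(size_t n) { return my_alloc(n); }
    extern "C" void  free(void* p)    { my_free(p); }

    // A real build would also need realloc, calloc, _msize, and the
    // operator new/delete family routed through the same allocator.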
> Why is there SemaphoreSlim in C# but not C++? Why is there no Benaphore primitive that can also be used in WaitForMultipleObjects?
I'm not sure, Windows owns this.
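For reference, a benaphore is just an atomic counter gating a rarely-touched kernel semaphore - roughly the sketch below (untested, error handling omitted). The catch, and probably part of why it doesn't exist as a Win32 primitive, is that the fast path never touches a kernel handle, so something like this inherently can't participate in WaitForMultipleObjects.

    #include <windows.h>
    #include <limits.h>

    class Benaphore {
        LONG   count_;
        HANDLE sem_;
    public:
        Benaphore() : count_(0), sem_(CreateSemaphore(NULL, 0, LONG_MAX, NULL)) {}
        ~Benaphore() { CloseHandle(sem_); }

        void Lock() {
            if (InterlockedIncrement(&count_) > 1)    // contended: fall back to the kernel object
                WaitForSingleObject(sem_, INFINITE);
        }
        void Unlock() {
            if (InterlockedDecrement(&count_) > 0)    // someone is waiting: wake exactly one
                ReleaseSemaphore(sem_, 1, NULL);
        }
    };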
> Serious issues in Microsoft Developer Connect are often ignored, closed as behaves as expected, or dismissed off hand
I've heard complaints about MSConnect before as well. All I can say is that it is the correct place to file bugs; and the issues there do directly show up in our bug list (someone goes through connect issues, filters/combines them, and files bugs).
> There is still no valgrind/cachegrind equivalent that provides the same level of detail
That is correct. Sorry.
> Our statically linked application takes 20 minutes to link and the link is not parallel. C++ compiles are likewise brutally slow
Link.exe performance is at the top of our minds right now, you're not the only one to bring it up. VS 2013 will have some perf improvements across the FE (to help with C++ being brutally slow) but there is always more to do.
> And no, I'm not going to turn on precompiled headers, MSVC builds incorrect binaries about 5% of the time as it is.
Never heard that before - codegen bugs are always deadly serious and treated with high priority. If you have a repro, please share it.
It is never an either-or. It is always a complex mix of what customers ask for, what the strategic priorities/market realities are.
Often the problem with these queries is that there are not enough devs complaining to MSFT. No PM/engg manager is going to ignore a bug/problem if it shows up high in customer requests.
On promotions - I think it's the reverse problem. People only get promoted for working on something that's perceived to be hard.
Note that you switch between "devs" and "customer" there. I argue that the real problem is that for VS those demographics are different. Devs in "Microsoft shops" don't go out and decide on a compiler, they use VS because that's all there is. And the decision to purchase that compiler is made for them by the suited management class who frankly don't care about C++11 support beyond what they see in a line item checklist.
Basically, the way feedback works in this world is with feet, not bug reports. If you build it they will come. If you don't they will leave.
On Windows there are lots of compilers to choose from.
They read in magazines that the new shiny IDE has better team support and integrates seamlessly with Exchange and the SharePoint intranet deployed last year.
But let's be realistic here. Most of those shops will never hire someone to write C++ code. In all likelihood, they are still porting VB3 apps to VB.net.
The last time I managed to do a full greenfield C++ project at work was around 2005.
The enterprise moved away from it a long time ago, and Microsoft's incoherent talk about going native does not help.
If they were serious about that, I would expect proper C++11 support and improvements to ngen to the point where I could use it as a real native code compiler. Not dumb UI changes.
>Basically, the way feedback works in this world is with feet, not bug reports. If you build it they will come. If you don't they will leave.
Is there any quantifiable metric to measure this so-called exodus beyond anecdotes and the ".NET is dying" posts on here?
If there were an IDE for .NET on par with VS, it would definitely see uptake regardless of the suits.
But that's really not the point. The upstream discussion was much narrower, and focused specifically on C/C++ support, which frankly sucks in the windows world compared to the renaissance we're seeing in Unix with our dueling multi-architecture full-support C++11 implementations.
I'm not nearly expert enough to comment on how good or innovative the .NET support in VS is, but I'm perfectly willing to believe it's great.
Your reasoning is really faulty because you aren't accounting for overall growth in the industry. If the world's Irish population increases by 20%, does that mean the Asian population shrank? Of course not.
Consider that Tablet and Smartphone platforms are in addition to flat PC sales.
Same goes for the world wide web in general. http://www.businessinsider.com/how-many-web-sites-are-are-th...
You can't logically say that because there are more Rails sites there are fewer Microsoft ones.
C# is pretty popular here: http://langpop.corger.nl/
here as well:
And that doesn't count non-C# .net ecosystem languages.
Could there be an exodus? Sure, but your reasoning is just wrong here.
I use Visual Studio to ship code that runs in browsers every day.
That accounts for a lot of software out there.
And no, I don't like Visual Studio, but it's the best tool out there for the majority of the people I work with.
There are still internal limits in the PE32 format which mean parts of the output as a whole will be limited to 2 GiB or so, regardless of the address space available to the linker.
I too have hit internal linker limits from time to time, but in the grand scheme of things, a 64-bit linker building load module output with 32-bit internal limits to run on 64-bit processors doesn't seem like a place worth staying for long. Just my personal observation.
What hits the 32-bit limit is the link-time code generation, which has to have the entire program's AST, plus all the profiling information, plus whatever other data structures it's using in memory all at once.
For my custom Qt5 build, I've disabled /LTCG only for QtWebkit (32/64bit debug/release) and was able to deal with this limitation (at some other cost I guess)
Mozilla has never struck me as the most... forward looking.
* Edit: oops, I corrected a typo where I wrote "32-bit build".
So what are their problems exactly? That they are building with VS 2005 (from what I saw in that thread) and MS isn't backporting changes to the 2005 toolset?
(Mozilla is building with 2010 now.)
Also note that the problem still exists in VS2012, and in fact is even worse because the linker memory usage has gotten higher in general. Fortunately we now have a fairly good workaround (developed with help from Microsoft -- maybe some of the people in this thread?) that limits which files participate in PGO:
The following list describes the various versions of cl.exe (the Visual C++ compiler):
x64 on x86 (x64 cross-compiler)
Allows you to create output files for x64. This version of cl.exe runs as a 32-bit process, native on an x86 machine and under WOW64 on a 64-bit Windows operating system.
This 32-bit linker is apparently LARGE_ADDRESS_AWARE, so running it on 64-bit Windows gets them 4GB to play with (which they are apparently rapidly burning through). What they need is a 64-bit linker running on 64-bit Windows that can produce 32-bit output.
          OS    Linker    Output    Mem
      1   32    32        32        3GB
      2   32    32        64        3GB
      3   64    32        32        4GB
      4   64    64        32        8TB
I am baaarely familiar with Microsoft-world development.
No. "Customer complaints" are not a substitute for product design sense.
Any survey of your customers is inherently biased: Your customers, by definition, are people who were willing to buy the product. You're rarely going to discover your biggest problems by counting the number of customers who report them, because your biggest problems are the ones that make people unwilling to become your customers in the first place.
I think devs don't complain because there's not much point.
It took me 9 months and 45 calls to support to get a relatively simple regression in IE9/ClickOnce fixed and all we got was a fucking registry fix that we now have to ship to 2000 clients.
Basically they broke ClickOnce in IE9 for launching via scripts due to the new download prompting stuff.
Neither the framework team nor the IE team wanted to take responsibility, leaving my poor support rep to reverse engineer both products.
Personally I would like to see proper C++11 support instead of playing around with C++/CX, and while they are at it, either provide a native compiler for .NET or improve NGEN's optimizer to C++ performance levels.
Given that Microsoft had .NET native compilers for Singularity, and that Windows Phone 8 .NET apps are compiled to native code, it isn't as if they don't already have the tooling available.
The main native language in Microsoft tooling is C++.
The kernel team is also pretty slow to adopt new C compilers; you can see this in the lag between the compiler versions released in the driver development kit and those released with Visual Studio.
Similar to what GCC did before changing to C++ as implementation language.
If mangling/decoration of symbols were standardized - along with how to structure vtables, how to handle exceptions, etc. etc. etc. - then I could say "C is officially deprecated". Until then, C is still the lingua franca for native development.
Write in C++ all you want, but expose a "C" interface - you'll be much more easily accessible from anywhere else.
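i.e. the usual pattern, roughly like this (names are illustrative, not from any real library):

    // Implementation stays C++; the exported surface is flat C, so any other
    // language's FFI can consume it without knowing MSVC's mangling or ABI.
    #include <string>

    class Engine {
    public:
        int run(const std::string& job) { return static_cast<int>(job.size()); }
    };

    extern "C" Engine* engine_create()                      { return new Engine(); }
    extern "C" int     engine_run(Engine* e, const char* j) { return e->run(j); }
    extern "C" void    engine_destroy(Engine* e)            { delete e; }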
It is called COM, and since Vista most new Win32 APIs are actually COM based.
WinRT is also COM based.
> Until then - C is still the lingua franca for native development.
Not in the Microsoft world. If they continue on the same route, your favorite language needs to speak COM.
Notice I also ranted about VS2012, Win8 SDK, Metro Firefox, and XP.
Not really a compiler problem, more like a linker/loader problem. If the __declspec(thread) is in the main .exe it works.
Things like this affect for example ANGLE (WebGL) when compiling for XP, as such one would have to resort to TlsAlloc/TlsFree.
It's just an example where an OSS project might've used the gcc/clang primitive for a thread-local var, someone later ports it to Visual Studio, only to find out that it won't work in a DLL on XP.
Now I know XP is no longer supported, but a lot of OSS projects (and probably lots of others) still target it.
Then again, it's not such a biggie; for example Elmindreda, the GLFW developer, was kind enough to re-implement the feature once I reported the problem.
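For anyone hitting the same thing, the TlsAlloc/TlsFree fallback mentioned above looks roughly like this - a sketch only, with no error handling, and a real DLL would also clean up per-thread values on DLL_THREAD_DETACH:

    #include <windows.h>

    static DWORD g_tlsIndex = TLS_OUT_OF_INDEXES;

    BOOL WINAPI DllMain(HINSTANCE, DWORD reason, LPVOID)
    {
        switch (reason) {
        case DLL_PROCESS_ATTACH: g_tlsIndex = TlsAlloc(); break;
        case DLL_PROCESS_DETACH: TlsFree(g_tlsIndex);     break;
        }
        return TRUE;
    }

    // Stand-ins for whatever the __declspec(thread) variable used to hold.
    void  set_context(void* p) { TlsSetValue(g_tlsIndex, p); }
    void* get_context()        { return TlsGetValue(g_tlsIndex); }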
I had my share of issues using xlc, aCC and sunpro cc.
- Popout explorer windows (So you can get the old style Pending Changes window, if you like)
- Make comments and discussions on code through web UI
- New release management plugin
- “Team rooms” allow your team to collaborate on code
"I'm OK with this" improvements:
- “Indicators” in code that give you info about blocks of code
- Cloud load testing
- More agile portfolio management options
- Test cases can be worked on through web UI
Mainly, as an example, I want to be able to have my NodeJS webservice project in the same solution as my MVC website project, and the C# dll client project. The NodeJS project doesn't need all the assembly folders, or a .Net compile... but it does need the configured prebuild step, that's it. People say to use a "Website" project, but that brings my entire node_modules tree into source control. It's a pain.
It should be noted that features listed as included in the VS2012 Nov CTP, such as initializer lists, variadic templates, etc., should not in my view be listed. The CTP does not work via the Visual Studio interface, but only from the command line compiler. And the CTP is not included in either Update 1 or Update 2.
Meanwhile GCC and Clang are both feature complete:
It seems that the OP talked about the blog's main focus first, and will come back later (maybe in a different post?) to talk about everything else.
I am hoping they have made substantial improvements that are not mentioned in this blog post.
Of course, that is the opposite of what I as a software developer want.
A keyboard macro system that doesn't suck (e.g., is available and /as fast/ as the one in Emacs).
A way to compare project configurations without involving either XML or a sea of random dialog-like boxes.
The TFS and workflow features they are adding make me weep.
The VS keyboard macro system has "overengineered" written all over it. Really all they had to do was record some keystrokes. Instead they probably had a team of like four engineers, a bunch of Q/A and several PMs on it, and it took them over a year. And the result /stinks/; it blows dead exploding goats.
It's more latency than long waits but it's enough to stop me using it.
But we've got 62 projects. I've turned off a few things like code analysis in R# and it runs fairly well.
I'm underwhelmed with the desire to suck all activities into the one tool to rule them all. Especially when it rules them from bleh TFS.
I want VS to let me write code. I want to find and manipulate text. I want it to compile fast and produce relevant warnings. I want a debugger. I want it to host diverse plugins. That's mostly it. ALM? I'm probably doing whatever it is that you mean by that, and I don't care about integrated tools for it.
That's all I want Visual Studio for, too. Fortunately those things will continue to work and we can continue to ignore things like TFS and mstest.
If I could make VS just do those things and nothing else, that would be really nice.
This is not a Madden game, nobody is asking for yearly updates for Visual Studio.
If your employer is too slow to keep updated, why should that slow MSFT down while it is trying to reinvent itself? Just catch up in 2015.
Just because it's trendy to bash MS for not releasing new browsers/IDEs/whatever every few weeks, that doesn't mean the vocal critics speak for all of us, nor even necessarily for the majority. I get paid to build stuff that works, and doing so requires stable foundations and reliable tools. Lately, I wish we had more Microsofts in the world paying attention to things like backward compatibility and long-term support, and fewer Googles and Mozillas and Apples who are quite happy to push out updates that break useful things that worked before.
There was nothing wrong with releasing a major version when you had major new features, more frequent minor versions that introduced minor features in a compatible way, and point releases for urgent security issues or bug fixes that didn't change any intended behaviour at all. This approach has worked for a long time, and IMNSHO the recent regular, fixed schedules for updates regardless of actual changes aren't doing most people any favours, and the version-free, always-online, push-updates-whenever stuff is just a mess that causes far more trouble than it's worth.
Did you mean forward compatibility?
No, I really did mean backward compatibility. Specifically, I was thinking of the degree to which MS have supported even very old (by IT standards) APIs and file formats and protocols even in much newer (again by IT standards) systems over the years, and the lengths they have sometimes gone to in order to maintain that support despite the potentially breaking changes not even being their fault in many cases.
However, I would also agree that they are better at forward compatibility than a lot of other major software developers today.
So here is one reason. Another one is money: why spend money every year (or two) on a product that isn't bringing that much to the table (at least for a lot of the projects I've been on)?
Ahem... I am. I would much rather have smaller more frequent updates to important tools than huge disruptive ones. Short feedback loops are a good thing.
I agree with you, but a Service Pack should suffice, esp. right after you've released a full new version/update to a major product/tool like VS.
I'm pretty sure I can upgrade to a service pack. Changing versions, OTOH, probably requires approval from management.
Two things are apparent:
1) The problem you are addressing is a management problem in your organization, not a problem with Microsoft releasing a new version of VS.
2) You aren't the decision-making customer for Microsoft VS at your organization, so even if the problem you had was with VS, it wouldn't necessarily be a problem MS would have much incentive to address.
That's what service packs are for. With a product that has as big a footprint as Visual Studio, it will be impossible for shops to keep up with annual versions. Many will simply delay updates. Ironically, increasing the pace of major releases may actually delay the adoption of some improvements by customers.
I'm actually running 2012 personally, not such a big deal on that front, my side projects tend to be pretty manageable and not hugely large or sophisticated.
But yeah, if you have anything large and enterprise-y, the pain just amplifies with each version you are behind.
It wasn't that much of a problem. It took about 3 days and most of that was fixing deprecation warnings and porting the in house test framework to NUnit.
The real problem in the process was getting the build tooling and all the associated crap surrounding the solution up and running. This was also only because the muppets who wrote it originally put it together with sticky tape and string.
Nevertheless I have read some worries that some bug fixes (may) break backward compatibility, and that projects targeting version 4.0 but developed on machines with version 4.5 installed may therefore behave differently when run on machines with only version 4.0 installed. Whether this is a real problem - that is, whether there are observable changes and version 4.5 is therefore not fully backward compatible with version 4.0 - I cannot tell.
UPDATE: There is a published list of breaking changes.
Note: The deleted comment was something like »This is not true«.
We have deviated quite a bit from the original comment - you can install different Visual Studio (and .NET framework) versions side by side if you have to support legacy projects. There are some cases where this will not work - obvious examples are .NET 1.0 not being supported since Vista, and .NET 1.1 since Windows 8 - but there are still a lot of cases where this works just fine. If a side-by-side installation is not possible, you can still just set up a second machine or a virtual machine for your legacy development needs. Not being able to upgrade Visual Studio because you have to support legacy projects is mostly a non-issue.
Maybe Microsoft is tight on money. :)
Sadly it seems to be the way it is going. Most likely driven by Windows 8.1.
I'd rather see a 2012.3 version... I'd also like to see a LOT more stability, as well as a non-building project type (for external systems) that still has pre/post-build events but no compile step from inside VS (mainly for projects that use other runtimes/build systems but make sense to include in a VS solution).
I know everybody is hot and bothered about C++11, but honestly I'd suggest just getting proper support for things we've needed for a decade or so before chasing the new shiny.
ALM is where the money is. Higher-ups in orgs are much more willing to pay for oversight and management features than language features.
What is the motivation behind having 10+ bundle and service options? Is it to trick us into buying the wrong thing, then forcing us to buy other addons that we initially didn't realize we needed? Is it an attempt to maximize sales by offering tons of bundle options to extract the maximum value out of the variety of customer needs that exist out there?
I guess shops that are even looking to buy VS are so locked into VS that we will spend the cognitive energy to figure out where the best value is for our needs, but this current method just doesn't seem elegant or efficient to me.
The trick is that many users will opt for the most expensive option because they don't understand or are intimidated by the marketing / licensing material and therefore make the safest choice. If you don't know what you need ( and most large corporate shops don't) you buy everything.
Having lower priced options mostly serves to hide the true costs of the product.
Server Error in '/' Application.
Description: An application error occurred on the server. The current custom error settings for this application prevent the details of the application error from being viewed remotely (for security reasons). It could, however, be viewed by browsers running on the local server machine.
Details: To enable the details of this specific error message to be viewable on remote machines, please create a <customErrors> tag within a "web.config" configuration file located in the root directory of the current web application. This <customErrors> tag should then have its "mode" attribute set to "Off".
<!-- Web.Config Configuration File -->
DO YOU NOTICE HOW VISIBLE MY COMMENT IS?
Thank god for http://blogs.msdn.com/b/zainnab/archive/2012/06/14/turn-off-...
It's definitely nowhere near as annoying as intellisense randomly breaking and putting a red line underneath half your statements.
There's a reasonably simple solution with which you can solve the red line problem, if you're talking about the C and C++ projects:
"Posted by JoeWoodbury on 25/02/2010 at 10:16 (...)
The solution is to add the location of the stdafx.h file to the include path list. This is often a matter of simply putting a dot comma (.,) as the very first item in a projects include list."
EDIT v2: Looking at the project you cited in your reply to this message, and the file http://code.google.com/p/freetype-gl/source/browse/trunk/dem... (on the computer where I don't have VS), I still have an idea of what the reason for such errors is: the project authors use something like:
#elif defined(_WIN32) || defined(_WIN64)
I feel silly for never mousing over the three characters at line 62 where Intellisense describes the problem it's having.
I feel your paIEnumerable
It's similar to making notes by hand, I'll generally use all-caps when doing so.
The same issue happened with their forced grey or white color schemes. It's not hard to give users the option to switch things like this to suit their preference. Yet MS has deliberately not provided any way to adjust these settings, and this gives users something to rant about, and rightly so. Really not a smart move in my view.
So I can pass std::thread(a,b,c,d,e,f,g) more than 5 arguments?
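(Presumably yes - if I remember right, the VS2012 cap came from the faux-variadic macros in the headers, which only expand up to _VARIADIC_MAX arguments, 5 by default. With real variadic templates in the compiler, something like this should just build:)

    #include <thread>

    void work(int a, int b, int c, int d, int e, int f, int g)
    {
        (void)(a + b + c + d + e + f + g);
    }

    int main()
    {
        std::thread t(work, 1, 2, 3, 4, 5, 6, 7);   // seven arguments
        t.join();
    }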