Sweet! Next stop, Microsoft should officially get ZFS running on Windows (there is a port of it already).
There is some sort of bitterness in the Linux community when people talk about ZFS or DTrace. But it's the consumer who is affected by it. For their sake, I hope they don't take the 'We have BtrFS/eBPF/X' path.
This. They really need a newer, more modern filesystem, especially since Apple has done away with HFS+ and has introduced APFS. ReFS is really dead in the water, and ZFS would give them a powerful, battle-tested filesystem which they could use as a real selling point for Windows Server. Also things such as Windows updates and system restore would benefit a lot from ZFS's native snapshotting capabilities.
Merging DTrace implicitly means they are fine with both the CDDL and Oracle's patents, and thus that ZFS wouldn't be an issue for them.
NTFS is in some ways arguably still more advanced than ZFS: think reparse points, alternate streams, transactions, BitLocker integration, and shrinkable volumes, to name a few.
Also, NTFS already has snapshots via Volume Shadow Copy.
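For anyone who hasn't poked at it, a rough sketch of what that looks like from an elevated prompt (creating shadows via vssadmin is limited to server SKUs; client Windows can still list and use them via WMI/PowerShell):

    C:\> vssadmin create shadow /for=C:
    C:\> vssadmin list shadows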
ZFS is great but it’s not going to be a viable replacement as it exists today.
Anyway, the problem I've always had when using non-standard file system metadata like alternate streams or sparse files is that they interoperate poorly in practice. There's lots of code and programs out there that want to treat everything like a plain old file and hence won't preserve alternate streams or sparseness. From my recollection this is true even for several of Windows's built-in utilities. Same issue if you want to transfer files over a socket or pipe, or to a non-NTFS storage medium like a FAT-formatted USB drive or a NetApp filer, or through a source control system. As a result this kind of thing is only really useful when deployed in a bubble; I believe WSL uses alternate streams for Unix permission bits and other Unix-specific file properties. Even though those WSL guest files live in a directory tree on a normal NTFS volume, they strongly caution you against touching them directly outside of WSL, presumably for the aforementioned reasons.
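A quick cmd session makes the point; the paths are hypothetical, E: is a FAT-formatted drive, and the exact error text may vary:

    C:\> echo secret > file.txt:hidden
    C:\> more < file.txt:hidden
    secret
    C:\> copy file.txt E:\
            1 file(s) copied.
    C:\> more < E:\file.txt:hidden
    The filename, directory name, or volume label syntax is incorrect.

Note that copy reported success while silently dropping the stream.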
Aside from features, NTFS has some performance issues compared to other modern file systems, notably with lots of smaller files.
- For permissions WSL uses EAs (NTFS extended attributes); for capabilities it uses ADS (alternate data streams)
- I'm not sure if it's really NTFS that is slow with lots of small files or the I/O subsystem in general... I got the impression it's the latter but not sure.
It absolutely matters: a fair amount of software, both enterprise and consumer, already uses these features. Malware detection, for example, makes heavy use of reparse points.
> Microsoft strongly recommends developers utilize alternative means to achieve your application's needs. Many scenarios that TxF was developed for can be achieved through simpler and more readily available techniques. Furthermore, TxF may not be available in future versions of Microsoft Windows. For more information, and alternatives to TxF, please see Alternatives to using Transactional NTFS.
I'm pretty convinced that documentation is BS. I could be wrong I guess, but I honestly don't see them removing TxF anytime soon. They didn't exactly build their business on breaking backwards-compatibility of APIs.
Did that change recently? I haven't used Windows since XP, but back then, they used Volume Shadow Copies (VSS) for making backups used for reverting changes, not NTFS transactions.
Is the MFT even documented and available via APIs on Windows? Could NTFS just change how the MFT works, transparently for well-behaved applications (i.e. those that do not grovel around in undocumented data structures), at the expense of losing backwards compatibility in the file system (which already only extends to features that have been there before)? Admittedly, there may still be a number of applications that decide to »optimize« a system by digging around in parts that they have reverse engineered, and those might simply corrupt the file system when running on a newer NTFS version, but I guess that problem may have existed previously as well.
The MFT is an internal structure. They could fix it, TBH, but the issue is that this would require an NTFS filesystem revision, and those of us who suffered through NT4's broken-ass filesystems might start complaining :)
> think reparse points, alternate streams, transactions, BitLocker integration, and shrinkable volumes, to name a few.
All of these except shrinkable volumes are (somewhat) already part of ZFS, assuming you replace "BitLocker integration" with the more general "encryption integration".
ZFS and NFSv4 were both designed to be able to serve the entire set of NTFS features over the network, to interop with Windows computers.
Reparse points are basically more primitive directory-only symlinks, aren't they? Even the symlinks introduced in Vista are better...
As for ADS: basically xattrs. They can be arbitrarily named, and each xattr on ZFS can be up to 16EiB large (the same limit as the file's primary contents). (As an aside, they aren't fully usable on Linux, since Linux itself imposes a 64KiB limit on an xattr's content -- but that's not a ZFS limitation.)
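For example, on Linux (file name hypothetical; these are the stock attr tools, nothing ZFS-specific):

    $ setfattr -n user.comment -v "hello world" report.txt
    $ getfattr -n user.comment report.txt
    # file: report.txt
    user.comment="hello world"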
> Reparse points are basically more primitive directory-only symlinks, aren't they?
Absolutely not! Reparse points are directories that are associated with drivers that extend the capability of the filesystem itself. They can be used to implement symlink-like behavior, but that's only a single use case that scratches the surface of their potential power.
Thanks, I wasn't aware of that... terms seem to get interchanged in the Windows world often enough.
It doesn't sound like it is really an inherent property of the file system, but just controlled by a driver. I don't see why they couldn't be made to work on any other file system, such as FAT or ZFS.
Reparse points are directory entries that are special and interpreted by the file system driver (or some other higher-level component that's not the file system itself). Directory junctions (which you mean here, I guess) are reparse points, as are symlinks and a bunch of other things.
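You can see this from cmd; a junction is just one reparse tag among many (paths hypothetical, output abridged and from memory):

    C:\> mklink /J C:\data D:\data
    Junction created for C:\data <<===>> D:\data
    C:\> fsutil reparsepoint query C:\data
    Reparse Tag Value : 0xa0000003
    ...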
Though it might finally do away with the ever-persisting Windows error messages about "this file is already open" followed by a search through all of the processes that might have it open to terminate the offender.
That error is orthogonal to the filesystem in use. Every OS that provides exclusive locking, regardless of filesystem, will return an error if more than one process wants to acquire such a lock.
> Every OS that provides exclusive locking, regardless of filesystem, will return an error if more than one process wants to acquire such a lock.
Obviously.
The super annoying thing is when the OS doesn't try to tell you which process(es) keep it open and doesn't even ship with built-in tooling to let you find out on your own.
Certainly not a sensible solution like having a dialog button that gives you all the relevant information immediately and empowers you to solve the problem without a goose chase.
The way filesystem locking works on Windows is an intentional decision and not an NTFS thing. It's there for a good reason even if it's inconvenient. The alternative has some real downsides. "Two CMD windows have the same CWD but are showing different folders" is not a user-friendly experience.
I bet pretty much every Windows user would trade confusing command prompts in strange circumstances for not having to reboot for every single Windows Update (due to it being unable to update files that are in use).
It would be horrendous to even try to troubleshoot a system with a dozen different versions of OS DLLs loaded because the system had not been rebooted for a dozen patches.
Or, imagine every copy of Word you have running is using a different set of binaries.
If a file is damaged by bit rot, NTFS will happily serve up garbage. So those are nice features, but viewed in that light they're little more than window dressing.
You say that as if bit rot is the black plague of data storage. In the past 20 years I have encountered 'bit rot' with NTFS... not once. I have encountered bad blocks and replaced drives once in a while by monitoring SMART status, which probably prevented the rot. It is definitely great to have protection against it, but to call the other features window dressing, when people have used them more often than they have ever thought about checksumming, is not objectively fair.
In the past 20 years I've lost multiple files to bit rot on NTFS. In my opinion, the primary purpose of a filesystem is to reliably store data. Checksumming is not particularly complex technically, does not impose significant run-time costs, and allows the filesystem to at least detect when data has been damaged. Unless there's some other very compelling reason not to do checksumming, it's reasonable to consider its omission a de facto bug.
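For context, here's roughly what detection looks like on ZFS (pool layout hypothetical, output abridged):

    $ zpool scrub tank
    $ zpool status tank
            NAME        STATE     READ WRITE CKSUM
            tank        ONLINE       0     0     0
              mirror-0  ONLINE       0     0     0
                sda     ONLINE       0     0     0
                sdb     ONLINE       0     0     3

Those CKSUM errors are corruption that was caught (and, given the mirror, repaired from the good copy); NTFS has no equivalent counter.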
Does this mean ext4 is bugged too? We have many filesystems with this bug, and we have lived with it for a while. Anyway, I don't disagree on the technical cost/benefit analysis, but still, unfair.
bcachefs is positioning itself to finally up the Linux file system game. It doesn't look like it will be upstreamed this year, but it appears to be usable, and it's being done by someone who really seems to understand file systems and knows the trade-offs different filesystems have made and how they played out.
NTFS holds up quite well when compared with more "traditional" filesystems such as XFS, ext4 and UFS (except for the choppy I/O performance, but as someone else said above that might be more of an issue with the subsystems above it); it has much more functionality than almost any of those I mentioned (for instance, I really don't understand why so few filesystems support transparent compression).
I think that filesystems like ZFS and Btrfs are simply one step ahead than anything else, not only in features but also in tooling and UX. Send/receive, CoW snapshots, checksumming are all tremendous features that I think would fit fantastically into the Windows workflow, if Microsoft really managed to integrate them well. For instance, I think a ZFS-based Windows update would be able to do away with transactions and rely instead on datasets to snapshot the system, apply an update and rollback everything if something broke, exposing to power users the tools to clearly understand and rollback their systems themselves by hand.
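A sketch of what that update flow could look like with stock ZFS tooling (the dataset name is made up):

    # before applying an update
    zfs snapshot rpool/windows@pre-update
    # ...apply the update...
    # if something broke, roll the system dataset back atomically
    zfs rollback rpool/windows@pre-update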
Obviously all of this can be achieved by rewriting everything from scratch as in-house, platform-specific features, but I just feel that continuing to replicate in NTFS/ReFS features that other open and widely supported filesystems have had for almost a decade is just not the best way forward, and it's not really better for anyone.
>(for instance, I really don't understand why so few filesystems support transparent compression).
Does anyone actually use this on Windows these days? I'm guessing the reason so few filesystems support it now is that the gains aren't worth it for the types of files that take up most of the space on filesystems these days, such as video files. You can't compress x264 video with a general-purpose compression algorithm. Storage space is plentiful and cheap, general-purpose compression yields little gain, and the stuff we're storing now is already compressed (images, video, audio, etc.).
I have LZ4 compression enabled on ZFS, and not only does it perceivably improve performance on slow drives (due to the lower amount of I/O), it saves actual storage space. An average OS install on a hard drive is full of things such as executables and text files, which summed up can amount to several gigabytes. These files compress pretty well in my experience, and even if it only amounts to saving a few GBs, that can still be crucial on small SSDs.
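For reference, enabling it is a one-liner (pool/dataset name hypothetical; the ratio shown is just illustrative):

    $ zfs set compression=lz4 tank/os
    $ zfs get compression,compressratio tank/os
    NAME     PROPERTY       VALUE  SOURCE
    tank/os  compression    lz4    local
    tank/os  compressratio  1.71x  -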
My experience is that NTFS is quite slow compared to ext4 on my machine (both via Windows and via ntfs-3g). Particularly for operations around large numbers of small files like moving/deleting a .git directory.
I wouldn't say that ReFS is dead in the water, I just think they shifted the focus to supporting virtualization features first to enable S2D/SDN. ReFS is still being developed and enhanced, the work is mostly in server SKU: https://docs.microsoft.com/en-us/windows-server/storage/refs...
> Merging DTrace implicitly means they are fine with both the CDDL and Oracle's patents, and thus that ZFS wouldn't be an issue for them.
Mind that DTrace was ported by Oracle (at least partially) to Oracle Linux first. They haven't done that for ZFS (instead they created btrfs, before owning Sun).
Heh, think about the bitterness in the OpenSolaris community (and the Sun diaspora). Way way back in S10, ZFS, DTrace, zones, and this other weird creature called SMF, they were all going places!
> For their sake, I hope they don't take the 'We have BtrFS/eBPF/X' path.
To my eyes at least they are pretty much helpless to do anything about it because of the licences. Do you have any ideas as to how they could work around the licencing issues ?
The licensing issue for Windows is different to Linux. ZFS as released by Sun was under the CDDL.
The source of incompatibility with ZFS and DTrace on Linux (until recently) was from the GPL, not from the CDDL. Windows (and, indeed MacOS - which has shipped DTrace for years), have no such restriction - the CDDL allows inclusion in larger proprietary works.
Not suggesting this should actually be done by other distros, but from what I understand, Ubuntu basically just ignored the license and included it anyways
They didn't ignore the license; they looked at it, and concluded that ZFS wasn't a derivative work of Linux and therefore it was safe to run CDDL modules on a GPL kernel. There is some disagreement as to whether this is actually valid, but it's no blatant disregard.
Honest question, who could claim to be damaged by someone running ZFS on linux? The closest I can get is 'unlicensed derivative work' regarding implementing a struct of function pointers, which is really contentious and unsettled (oracle v google, APIs copyrightable, etc).
I'm not a lawyer or anything close, but who would possibly have standing and motive to sue? There's, what, a dozen people at most who have copyright on those struct definitions? And API copyrightability is still unsettled?
Maybe the lawyers were too conservative, and the engineers who listened to them were applying the wrong model?
> To my eyes at least they are pretty much helpless to do anything about it because of the licences. Do you have any ideas as to how they could work around the licencing issues ?
DTrace has a Linux port which has been GPL-licensed since last year. So they are not helpless, and there is nothing to work around.
Sun decided a long time ago it does not want it to be part of Linux, as per its license. It's not us being bitter, I just don't understand why we should get out of our way to accommodate somebody who doesn't want to be on Linux.
Personally, I'd rather see more resources poured into bcachefs[1].
> Simon Phipps, Sun's Chief Open Source Officer at the time, has stated this is not the case:
Of course he did. There's nothing to gain from admitting it. It's the sort of statement akin to 'Facebook deeply values your privacy', as in of course they're going to say that.
On the other hand, submitting the license to the OSI, but not the FSF to be consulted on compatibility before using it suggests otherwise.
Per Phipps in September 2006, a few months after DebConf (July 2006):
"""
> It would be a critical mistake to underestimate her knowledge on the CDDL issues.

Nonetheless she is wrong to characterise the opinion of the Solaris engineering team in the way she does. She is speaking this way because she lost an argument inside Sun, not because her view is representative of the views of Sun or its staff in the way she claims. She, along with many actual engineers, was an advocate of using GPL for OpenSolaris, but the need to release rather than wait for one of {GPL v3, Mozilla license revision, encumbrance removal} meant that this was not possible. I am still furious with her for the statement she made at DebConf, which was spiteful and an obstacle to a united FOSS movement.
"""
Cooper is, AFAICT, the only one that made the claim: was it ever corroborated by anyone else? Phipps and Cantrill dispute it: are/were there others?
Edit: it seems that even GPLv3 (which they considered) is not compatible with GPLv2 (which Linux uses), so even if they had gone with GPLv3 it would still be a problem. One main concern (IIRC) was patent license grants.
The "she, along with many actual engineers" is referring to the desire for GPL; I do not doubt there were. The "are/were there others?" is asking for corroboration on the claim that CDDL was designed specifically to be incompatible with GPL.
The CDDL-GPL drama seems to be one of an attitude of "fuck-ZFS because fuck-Sun/Oracle and their fucking CDDL license".
The validity of the claim that the design of CDDL was 'malicious' is the thing I am skeptical about.
It may have ended up incompatible because of technical-legal reasons, but was that the intent? Cooper says yes, Phipps/Cantrill say no.
Sun did submit the license to the OSI for consultation/approval. They could very well have consulted the FSF/FSC too, but they did not. This to me suggests they did not care about GPL compatibility.
I don't understand this attitude. Why the love for Microsoft, after all the abuse it's shown towards other players in the OS market, all the monopolistic behavior, all the ...
There have been better, cleaner, leaner, FOSS operating systems out there for decades, why support, or even give mindshare to something like Windows?
These "Linux isn't ready for the desktop" articles are mostly BS, and just keep repeating the same, tired garbage arguments. I'll refute a few points:
Notice that a lot of this one is about NVIDIA. The solution is simple: don't use it. Desktop Linux works great on Intel and AMD, where the driver support is excellent. NVIDIA isn't the only GPU vendor out there.
Printers work much better in Linux than in Windows, and it's been this way for a very long time. The exception is cheap, crappy inkjets. Solution: don't use cheap, crappy inkjets. Inkjets are a scam.
Games are always on this list. I don't care about games.
Many of the other points in your list are completely obsolete.
For business use, most of these factors are a non-issue. Being able to play AAA games is not important for a business computer, nor for many (most?) home computers. I know gamers can't conceive of this, but not everyone plays recent AAA video games.
- ignoring that most business runs on MS products and compatibility issues are a thing
This attitude is one reason that was not on the list. It's one that should not be forgotten, though, as it hits hard for those who are determined to make it work. They find themselves confronted by a self-centered community of demigods who think the world should arrange itself around what they assume it is.
Nvidia isn't the cheapest option by far, so no, you should change this to "spend LESS money". If you're talking about printers (your post was poorly written and didn't refer to anything point-by-point), then yes, and also "spend more money to save money". Inkjet printers are a scam, full stop. They cost a LOT of money in consumables. You will save a lot of money by buying a laser printer instead. It's really that simple. And every laser printer works just fine in Linux. There is no good reason to buy a <$100 inkjet printer unless you really want to be ripped off.
>- I don't care about it
Yes, and lots of other people don't as well. If games are the only thing keeping you on Windows (and I've talked to a bunch of people for whom this is the case), then stop complaining about all the problems and spying in Windows: you're making a choice to enjoy a luxury, and put up with things you find abhorrent, just because of your addiction to games. I know this is news to a lot of younger techies, but not everyone cares about modern AAA games. I guess I shouldn't even bother writing this because apparently this concept is so utterly foreign to them, but it's true. There's large swathes of the computer-using population (which these days is most of the population) that don't buy or play video games, and haven't since they were kids. Amazing huh? So no, the fact that Linux can't play every single AAA game just isn't an issue to them.
>- something undefined is obsolete
I have no idea what you're referring to here, perhaps you could try being more descriptive.
>- ignoring that most business runs on MS products and compatibility issues are a thing
In my experience, all my work is in Linux, except for MS Office and Outlook, and companies are wasting a lot of money buying me a separate computer (or VMware licenses etc.) just so I can read email on Outlook.
That's not the point here. Usually people have problems with Linux and Nvidia because they already have Nvidia in their computer. So changing hardware for Linux to make it work properly involves spending more money. Same goes for people's printers. And no, it's not just about inkjets. This is about drivers and unforeseeable issues you face if you switch, and I know what I'm talking about, since I switched my parents' laptop over to Mint and was confronted with issues on their laser printer...
> Yes, and lots of other people don't as well. If games are the only thing keeping you on Windows then stop complaining about all the problems and spying in Windows
I'm not sure if you misread the thread here. The original argument was about someone complaining that people have "love for Microsoft". Nobody was complaining about anything you bring up there.
> you're making a choice to enjoy a luxury, and put up with things you find abhorrent, just because of your addiction to games.
So what you did here was to blow out a straw man to infinity. Who's talking about addictions? Not everybody who wants to play a game after work or from time to time is addicted. What is this generalizing overdramatization?
> I have no idea what you're referring to here, perhaps you could try being more descriptive.
I didn't know what you meant by obsolete. That's why I wrote that. If you don't know what's obsolete, it wouldn't be much of a surprise to me considering the way you crippled the other points above but it's still kinda weird.
> In my experience, all my work is in Linux, except for MS Office and Outlook
So I guess you just don't have much experience then.
It's not just Office and Outlook. It's an endless number of small programs in different corners of different businesses, plus the hardware. There is a huge world out there beyond your narrow world view of addicted kids, inkjet printers and Outlook. The fact that you still think you can impose this narrow worldview on strangers reinforces the point I made at the end of my previous comment. You know, the one you ignored. You might want to sit down and think about this point alone. It may give you a hint why people like you are not helpful to the whole migration away from Microsoft. You are rather another reason why it's happening slower for some, or not at all for others.
>Usually people have problems with Linux and Nvidia because they already have Nvidia in their computer. So changing hardware for Linux to make it work properly involves spending more money.
Ok, then suppose you decide you want to run MacOS. Do you think it's unreasonable to have to change your hardware for that? Then why do you think Linux should be able to magically run perfectly on all PC hardware out there?
>Nobody was complaining about anything you bring up there.
The Win10 spying is the #1 complaint I hear from people these days about why they're "really" going to switch to Linux this time.
>Not everybody who wants to play a game after work or from time to time is addicted.
Yet the anti-Linux people constantly bring up games as some huge reason that desktop Linux "just can't work" and is "completely impossible". No, not everyone plays games, but people like you just can't conceive of this, can you?
>So I guess you just don't have much experience then. It's not just office and outlook
I have decades of experience, unlike you apparently. In all my engineering jobs, the only thing Windows is used for is Office and Outlook, and maybe running a hypervisor.
> Ok, then suppose you decide you want to run MacOS. Do you think it's unreasonable to have to change your hardware for that? Then why do you think Linux should be able to magically run perfectly on all PC hardware out there?
I think it's unreasonable to change to Mac at all but that is a completely different topic regarding a walled garden.
I don't think Linux has to run on all kinds of hardware. Actually my argument was that Windows does that and that's one reason why it's everywhere and it will stay so for quite some time. You should read the comment you've originally answered to again.
> The Win10 spying is the #1 complaint I hear from people these days about why they're "really" going to switch to Linux this time.
The narrow group you are talking about here are also not the topic here. As I said before: Windows is everywhere. In regards to your argument here: most people don't care or don't even know about this.
> Yet the anti-Linux people constantly bring up games as some huge reason that desktop Linux "just can't work" and is "completely impossible". No, not everyone plays games, but people like you just can't conceive of this, can you?
Look at you. Just one comment ago they were all addicts; now everybody questioning the actual usability, compatibility and all those other issues from my first comment's link are "anti-Linux people", and of course everything is my fault now... which brings us back to the topic of attitude. You know, that issue that you keep ignoring.
> I have decades of experience, unlike you apparently. In all my engineering jobs, the only thing Windows is used for is Office and Outlook, and maybe running a hypervisor.
Just like above, you've displayed your extremely narrow experience and/or world view, and you keep doing it.
I work for a huge worldwide engineering and architecture company and we have all kinds of programs that have been written for and run on Windows. For example, software that collects and manages logger data. Ever heard of AutoCAD? Revit? I mean, seriously, how could you in your DECADES OF EXPERIENCE never have heard of those? Everything around SharePoint. Everything the architects design for VR. Also, client ports and client software are exclusively Windows in this field. Ever worked for any branch of the military or state entities? All of this multiplies if you look at our subs and their subs.
So I guess your decades of experience are either in a very narrow field, or you are ignorant, or you just lied to me and hoped to get away with it. None of those options sheds a good light on you or the Linux migration movement. As I have mentioned above: you are hurting it.
Yes, remembering this makes me think twice about switching back over to Linux from macOS. On the other hand, Windows, BSD and Solaris run on far fewer devices...
To run native containers, they've implemented almost all Linux kernel namespaces (IIRC mount was almost there already) and resource limiters. So, if there are some profits on the horizon, that may become a reality.
It seems to be based on a project which has had virtually no contributions since last August.
In addition, I'm not sure where Microsoft is going with this: your DTrace scripts probably won't be portable anyway. The only common point is the language. Is that sufficient?
Depending on what and where you are tracing, scripts likely already are not compatible between different operating systems which support DTrace - FreeBSD, Solaris and Illumos, and MacOS all have different system calls (for example).
That said, there are _many_ points of commonality which will apply to Windows too - USDT probes, function boundary tracing etc.
Do they need to be portable/compatible? If MS wrote their own version/alternative, then it'd be incompatible with everything. This way they get a load of high-quality code for free, and even if it's not directly compatible with existing uses at least the general framework is known by existing users and there's existing documentation so they don't have to do the whole thing from scratch and train all the users themselves. API compatibility would help, but there's value without it.
Only time will tell, but it can't hurt to be language compatible. All the unix engineers out there who depend on dtrace will find it more comfortable to use.
On the other hand, using a foreign syntax can hurt too, had they used something that looked more like powershell for example, it might have been easier for Windows admins to get accustomed to.
As a BSD user with partial dtrace support[0], I'm very interested to see what impact this has in the instrumentation world, considering Linux has gone with eBPF[1] for its workalike implementation.
the scope of DTrace far exceeds that of procmon. procmon will show you params and return code for file accesses, and provides filtering, but that's the limit.
whereas DTrace could tell you "these are the file accesses that took longer than 5ms" or "here's a histogram/heatmap of file access durations". DTrace can correlate — file accesses by a given process in-between a given function's entry and exit — telling you _why_ a file was accessed.
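For the curious, those examples are more or less one-liners in D (syscall-provider syntax as on illumos/BSD; the probe names on Windows will presumably differ):

    # file opens slower than 5 ms, with the path that was opened
    dtrace -n 'syscall::open*:entry { self->ts = timestamp; self->p = copyinstr(arg0); }
               syscall::open*:return /self->ts && timestamp - self->ts > 5000000/
               { printf("%s %s %dus", execname, self->p, (timestamp - self->ts) / 1000); }
               syscall::open*:return { self->ts = 0; self->p = 0; }'

    # histogram of open(2) latency, per process
    dtrace -n 'syscall::open*:entry { self->ts = timestamp; }
               syscall::open*:return /self->ts/
               { @[execname] = quantize(timestamp - self->ts); self->ts = 0; }'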
well, maybe you're just asking how performant DTrace would be at achieving the same output. sadly I can't answer that.
I'm happy to see the changes, and more than happy to consider Azure, and I even like Azure DevOps... I'm still a bit more comfortable in a Mac or Linux environment. There are some odd Windows-isms with bash on Windows, or WSL, or Docker Desktop, etc.
I'm hoping a lot of it gets more solid in the next couple years. As it stands, I'm happier to see .Net Core running outside windows more.
Anyone operating a cloud of remotely interesting size will need DTrace, or, if they use Linux, eBPF tracing. It was a foregone conclusion that Microsoft needed DTrace.
Microsoft surely also needed it for other reasons. Developers, sysadmins, and power users -- all need DTrace or DTrace-like tools.
We're finally leaving the dark ages of unobservable operating systems! Hooray!
We should all give huge thanks to the creators of DTrace, Mike Shapiro, Bryan Cantrill, and Adam Leventhal, as well as Brendan Gregg and all of the crew at http://dtrace.org.
I'm sure scratching their own itch was a significant part of this. DTrace is fantastic, and Microsoft is one of the largest server operators in the world so the need is clearly there.
Indeed. But Sun is Oracle now. So yeah, Oracle is to blame. They literally fired an employee working on ZFS for saying Oracle should GPL it.
There, one more reason to hate Oracle.
Just think about it for a second. It’s ridiculous. Oracle has plenty of open source code. Merely suggesting that something be open source could not possibly lead to termination. There is obviously way more to that.
Whatever reason he was fired for might be unjustified, I have no idea. But it was absolutely not due to suggesting that they open source something.
If you’re basing that on this tweet, not only does it come well short of the claim you’re making, but it clearly indicates that Cantrill is going on third party hearsay.
I think eBPF is theoretically faster: eBPF is verified statically, while DTrace loads/writes are all checked dynamically.
I guess the eBPF verifier is a bit scary, because a small bug anywhere in the verifier creates a big hole. I think Project Zero found a lot of bugs in the verifier.
It's pretty scary putting an interpreter in the kernel. I'm not surprised Linux didn't want to have two completely different interpreters in there.
DTrace is now GPLv2 (as of 2018, ~10 years after the acquisition); it wasn't for a long time (and there's still a separate license around userspace), because Oracle was holding it as a competitive advantage, I presume for the Unbreakable Linux distro (they ported DTrace over to Unbreakable Linux ~2012). The rest of the Linux community continued with other projects (SystemTap, ktap, LTTng, ftrace, etc.) before settling, over the last 24 months, on building on top of BPF (see https://github.com/iovisor/bpftrace for a high-level DSL interface). At that point Oracle GPL'd DTrace, i.e. when it held no value, because everyone had moved on.
You're deluding yourself if you think the CDDL hasn't been a major hurdle against ZFS success in the Linux world. ZFS isn't successful in Linux because the CDDL doesn't matter, it's successful despite the fact it does matter because people put a lot more work into it than otherwise needed and work with the licensing and packaging issues involved with that. But saying "it isn't a real problem" is just factually false, no matter how much you don't like that.
If the CDDL simply didn't matter, after all, they could just add it to the source tree and be done with it. I wonder why they haven't done that...
ZFS is still problematic, specifically because of the GPL-incompatible license. Various distros have found ways around that (or are deliberately ignoring those issues at their own peril), but it's still very much "a real problem", and it took years (if not decades) to get to the point where ZFS on Linux is actually as viable an option as it is today.
The BPF people cannot be blamed for not using CTF. Even if it had been relicensed to the GPLv2 way back in 2005, until recently (https://github.com/oracle/libdtrace-ctf release 1.0 or 1.1) it was impractical to use CTF on larger projects because the file format could only encode a strictly limited number of types (2^15 in each of a parent and child container): it had a lot of related limitations as well, but that was the big one. This is not enough for a largeish enterprise kernel, even assuming that you share types used by multiple modules to reduce the overall type count. Also, BTF and CTF serve rather different purposes: CTF specifically encodes knowledge about C types, while BTF is specialized for encoding information about BPF maps. You can't use CTF for that: C doesn't even have a map type, nor anything like one, and the bits CTF spends on things like the details of floating-point formats are wasted on BPF.
As for the other part... obviously, as someone working on DTrace and using it ever more, I think it does hold value in its own right. It operates at a different level of the stack from BPF, in any case: it's a user-facing tool like a kernel-level awk, which is nothing like BPF.
In my opinion, saying that DTrace doesn't hold value because of BPF is like saying that C doesn't hold value because ARM assembly language exists as well as x86 (in this metaphor, C == DTrace, ARM assembly language == BPF, x86 assembly language == DIF, the DTrace intermediate format). It certainly seems possible to replace the DIF portion of DTrace with BPF, but this will not obsolete BPF nor DTrace: instead, DTrace will drop DIF and the DIF interpreter and build on BPF, generating BPF instruction streams the way it now generates DIF instruction streams. We get to improve BPF if needed and drop a redundant interpreter and BPF tracing gets a hopefully-nicer user interface in the shape of DTrace, and wider usage. Win-win!
(I'm not doing most of the work on this, so my opinion is far from authoritative, but I did do a preliminary experimental conversion of the hand-rolled DTrace code generator to emit BPF instructions instead, and it seemed perfectly practical: the two encodings are remarkably similar, and BPF is pleasant to generate, as such things go. That's only a small part of what needs doing, of course...)
It didn't used to be available under GPLv2, it was originally under the CDDL until late 2017 when Oracle relicensed it. This was years after the Linux kernel community had been working on other solutions that didn't have that restriction.
Do you feel it? The tectonic plates of developer sentiment slowly shifting below your feet. It won't all come in a wave. It is the gradual investment in tools like this that over time amounts to a viable alternative to the MacBook Pro running macOS. WSL, open source .NET, investing in making Edge work like Chrome (more important to developers and consumers than your ideological quandaries with Google owning the web will ever be). The last few years have been great for MS.
Microsoft has had an enormous developer base for decades. Their changes aren't beating competitors, they're simply staunching the flow away from the Microsoft stack. Claiming it as a victory when Microsoft cedes that the enemy has won in a variety of ways is...an odd victory dance. Quite a wave.
I use a Windows 10 desktop for 1/2 my work. The other half I use macOS on a MacBook. They're both great (and the latter is fantastic for virtually any sort of work, whereas with the former I am constantly encountering Windows-specific limitations that demand that I fire up a Hyper-V instance --- WSL is very limited in a variety of deadly ways).
I get the impression that there's a lot of disagreement amongst developers regarding Microsoft's true intentions. I see three main clusters at the moment:
(1) Those who consider Microsoft to be reasonably trustworthy based on recent behavior.
(2) Those who will need Microsoft to continue its current behavior for a longer period before they're ready to trust.
(3) Those who consider Microsoft's current behavior to be at best a partial improvement over the past, or at worst a charm-offensive meant to distract from their ongoing bad behavior. E.g., Windows 10 with unavoidable snooping, and ongoing stealth patent attacks on Linux.
Not if you are developing for linux. macOS greatly simplifies that over windows. Of course you can develop for linux on windows but the third party tools that you need to do that are much more complicated and clunky.
I've seen Linux devs on macOS fall into several traps by thinking their BSD-based macOS is equivalent to Linux. One common pitfall I see is with filesystem access. Mac, by default, has case-insensitive filenames (like Windows), the opposite of Linux. Yes, Mac supports case sensitivity, but it is not the default, and apparently a lot of apps don't take this into account and fail spectacularly when case sensitivity is turned on (I can't provide any examples, I don't use Mac, it's just anecdotal from reading online).
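The classic way this bites looks something like the following (the APFS/HFS+ default is case-insensitive but case-preserving):

    # macOS, default volume
    $ touch Makefile
    $ ls makefile
    makefile          # resolves to the same file

    # Linux (ext4/ZFS/etc.)
    $ touch Makefile
    $ ls makefile
    ls: cannot access 'makefile': No such file or directory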
This was a while ago, but when Steam was first introduced on macOS, it did not play nice with my case-sensitive HFS+ partition.
When I first set up my Mac, I thought case sensitivity would be a good idea, just in case I ever had to copy files from another case-sensitive file system. I never did need the case sensitivity, but I did have to repartition to get Steam working. It taught me a good lesson about switching away from defaults "just in case."
You may be new to "embrace, extend, and extinguish" [1]. Everything looks great during the embrace phase. In fact, everything looks even better during the extend phase. Industry has a short-term memory for this tactic, and it does seem to be going in Microsoft's favor at the moment.
I'm curious to see how they will act once they regain sufficient market share. Will the old MS appear and will they implement the extinguish phase?
The "Microsoft has changed" is a well coordinated marketing ploy, probably their biggest achievement of the last 10 years. And these days it's actually pretty hard to distinguish the shills from the fanboys that genuinely believe that they changed.
There's something about computing that makes us want to cheer for these big companies as if they are sports teams, no matter how many times they screw us.
But for the uninitiated, the biggest problem for some time has been that Microsoft was and still is a big patent troll. They also continued fighting open standards such as the OpenDocument format, even under Satya Nadella. To their credit they did join the Open Invention Network, but it's still unclear which parts of a Linux OS are covered and which aren't. The FSF rightly asked for clarification on whether the multimedia components are covered, and to my knowledge there have been no clarifications thus far.
> "Investing in making Edge work like Chrome (more important to developers and consumers than your ideological quandries with Google owning the web will ever be)"
That's incredibly naive. First of all they aren't "making Edge work like Chrome", that's an overstatement, the new Edge is going to be only a shell on top of Chromium and nothing more.
Microsoft moving Edge to Chromium is either them throwing in the towel and admitting that they can't develop a browser, or them pulling another embrace, extend and extinguish. After all, as we've established already, they don't really like users having a choice when it comes to browsers. This is important to mention because, even for the Chrome fans out there, Edge switching to Chromium provides the perfect opportunity to disallow alternatives on top of Windows. This is just a possibility, mind you; it is far more likely that Microsoft will simply place Edge on life support.
In either case, whether they end up contributing to Chrome's ecosystem or not, consumers lose for a myriad of reasons. And heralding this move as something good ignores the whole history of computing.
MacOS X has definitely not "always" had DTrace - it shipped in 10.5 along with the "Instruments" front-end. It's true it is becoming distinctly less useful on OSX though - a shame as I use it extensively as an (application) development tool. This might make me take a slightly more serious look at Windows for laptops in future.
That is true for everyone. The DTrace source was released in Jan 2005, and Mac OS X 10.4 was released in April of the same year. So they must have ported it immediately and shipped it in the next release (10.5).
WOW64 is a 64-bit "wrapper" for 32-bit processes. You can see this when you debug a WOW64 process in WinDbg[0]; so, the system is agnostic about its "bitness" because it "appears" to be an x64 process but it's really 32-bit, just wrapped. Thus, the posit. :)
I wonder if they'll backport DTrace so it can be used on older Windows Servers (2012, for instance)? Or maybe they can somehow make it work in .NET Core only for such operating systems - it could still be very useful.
Not too likely. The hard part of implementing dtrace isn't porting over the program itself, it is the huge number of OS kernel hooks required to instrument. Those are unlikely to be back-ported. The really interesting question is now that they have the hooks, will they consider making additional interfaces with their own languages (powershell anyone?)