
New OS X uses Windows file sharing by default - nkhumphreys
http://arstechnica.com/apple/2013/06/new-os-x-uses-windows-file-sharing-by-default/?utm_source=feedly&utm_medium=feed&utm_campaign=Feed%3A+arstechnica%2Findex+(Ars+Technica+-+All+content)
======
rogerbinns
SMB has an extension mechanism, and SMB 1 has had support for Unix extensions
for over 15 years - I was the author of the original Unix extensions spec. You
can get full Unix semantics using them (links etc.).

The predominant form of extension is an "info level". Somewhat analogous to
the data structure returned from stat, the numeric info level controls which
structure is returned (or supplied). Microsoft had a tendency to add new info
levels that corresponded to whatever the in-kernel data structures were in a
particular release, rather than to longer-term good design.
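The mechanism can be pictured as a numeric tag selecting a wire format. A
minimal sketch (the level numbers and field layouts below are made up for the
example - the real SMB info levels have their own numbering and layouts):

```python
import struct

# Hypothetical info levels: a numeric tag selects which wire structure
# a reply payload should be decoded as. Illustrative only, not real SMB.
INFO_LEVELS = {
    0x0101: ("<QQ", ("size", "alloc_size")),
    0x0102: ("<QQII", ("size", "alloc_size", "uid", "gid")),  # "Unix"-style
}

def decode_info(level, payload):
    """Unpack a reply payload according to its info level."""
    fmt, names = INFO_LEVELS[level]
    return dict(zip(names, struct.unpack(fmt, payload)))

# A server supporting a newer info level just adds an entry to the table;
# older clients keep requesting the levels they already know.
reply = struct.pack("<QQII", 76800, 131072, 1000, 1000)
print(decode_info(0x0102, reply)["size"])  # 76800
```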

The general chattiness comes from their terrible clients like Windows Explorer
(akin to Finder for Mac folk). I once did a test opening a zip file using
Explorer. If you hand-crafted the requests it would take 5 of them -
open the file, get the size, read the zip directory from the end of the file,
close it. Windows XP sent 1,500 requests and waited synchronously for each one
to finish. Windows Vista sent 3,000, but the majority were asynchronous so the
total elapsed time was similar.
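For comparison, the hand-crafted sequence is easy to sketch with an ordinary
file API - open, get the size, read a tail block to locate the zip's
end-of-central-directory record, close (a local temp file stands in for the
file on the share):

```python
import os, struct, tempfile, zipfile

# A sample zip standing in for the file on the network share.
fd, path = tempfile.mkstemp(suffix=".zip")
os.close(fd)
with zipfile.ZipFile(path, "w") as z:
    z.writestr("memo.txt", "hello")

with open(path, "rb") as f:                  # 1: open the file
    size = os.fstat(f.fileno()).st_size      # 2: get the size
    f.seek(max(0, size - 65536))             #    (one 64 KiB block is enough)
    tail = f.read()                          # 3: read the end of the file
# leaving the 'with' block closes the file   # 4: close

# The end-of-central-directory signature locates the zip directory.
eocd = tail.rfind(b"PK\x05\x06")
total_entries = struct.unpack_from("<H", tail, eocd + 10)[0]
print(total_entries)  # 1
os.unlink(path)
```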

I worked on WAN accelerators for a while, where you cache, read ahead and
write behind in order to provide LAN performance despite going over WAN
links. In one example a 75 KB Word memo was opened over a simulated link
between Indonesia and California. It took over two minutes - while
instantaneous with a WAN accelerator. The I/O block size with SMB is 64 KB,
so they could have got the entire file in two reads, but didn't.
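The "two reads" figure is just block arithmetic - with a 64 KB I/O size, a
75 KB file needs ceil(75/64) = 2 read requests:

```python
import math

BLOCK_SIZE = 64 * 1024   # SMB I/O block size from the anecdote
FILE_SIZE = 75 * 1024    # the Word memo

reads_needed = math.ceil(FILE_SIZE / BLOCK_SIZE)
print(reads_needed)  # 2
```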

If anyone is curious about what it was like writing a SMB server in the second
half of the nineties I wrote about it at
[http://www.rogerbinns.com/visionfs.html](http://www.rogerbinns.com/visionfs.html)

~~~
michael_miller
Do you know the cause of the 3k requests which Vista made? Do you have a sane
theory why these were occurring? Also, do you have any suggestions for better
clients to use?

~~~
rogerbinns
> Do you know the cause of the 3k requests which Vista made? Do you have a
> sane theory why these were occurring?

Backwards compatibility and layers of indirection.

Microsoft has always made great efforts at backwards compatibility - Raymond
Chen's blog is a good source of stories. Quite simply, if you upgraded Windows
and apps stopped working, you'd blame Windows. Of course it is almost always
the apps relying on undocumented behaviour, ignoring documentation, relying on
implementation artifacts, etc. This means a lot of code to detect and work
around problems in other components. For a networked filesystem client the
simplest way is sending lots of requests and picking out the results of
interest based on what comes back. Networked filesystem servers also work
around client problems in various ways - e.g. they may return smaller block
sizes than the client requested because that client is known to have
occasional problems. All of this builds up layers and layers of workarounds,
workarounds to workarounds, having to test against OS/2, etc. SMB2 was an
attempt to wipe the slate clean (no more OS/2!) but of course the crud starts
building up again.

Explorer isn't, despite appearances, a program that displays files and
directories. There are layers and layers of abstractions, parts provided by
COM, etc. The code that knows it wants to display the listing of a zip file is
many layers away from the code that generates network requests. It is always
easier to write code that does more than strictly needed than to write the
absolute minimum necessary.

------
onedognight
> Time Machine, only works over a LAN with destinations that support AFP. This
> is at least in part because of Time Machine's reliance on Unix hard links,
> and also in part because it has to be able to ensure that any OS X files
> with HFS+ specific metadata are correctly preserved.

This is not the reason. Time Machine does support hard links, legacy Mac
metadata, and other Unix features. It does this by writing all the data into
large blobs (a sparse bundle) with an embedded filesystem of its choosing
(i.e. HFS+). It can use any destination filesystem for the blobs, including
FAT.

~~~
__david__
In particular, Time Machine makes heavy use of hard links to _directories_,
which not many filesystems support. With HFS+ Apple can be sure that support
is always there.
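A quick way to see how unusual this is: on typical Linux filesystems link(2)
refuses directories with EPERM, and HFS+ allows directory hard links only
through Apple's own special support, not the portable call. A sketch:

```python
import os, tempfile

d = tempfile.mkdtemp()
try:
    # Hard-linking a *directory*: refused by most Unix filesystems.
    os.link(d, d + "-hardlink")
    linked = True
except OSError as e:
    linked = False
    print("refused, errno:", e.errno)  # EPERM on Linux
os.rmdir(d)
```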

~~~
deathcakes
Actually I think you'll find it makes use of hard links to files. It's
basically a reimplementation of rdiff-backup, or it might be the other way
round. I can assure you that no directories get hard linked, and I'm sure
someone will furnish the obligatory xkcd.

Edit -- I stand corrected! It does in fact link folders as well. Also:
[http://xkcd.com/981/](http://xkcd.com/981/)

~~~
astrodust
It hard-links directories, which is non-standard but supported by HFS+. It's
kind of crazy to do in general, but in this specific use case it's a great
idea.

~~~
deathcakes
Agreed - having read up a bit on how it works, I have to say that it's a
pretty neat trick.

------
apitaru
Finally. Someone at Apple must be a Bukowski fan. I'm reminded of his poem "16
Bit Intel 8088 Chip" (not his greatest, but suitable):

[http://bukowskiforum.com/threads/16-bit-intel-8088-chip.2791/](http://bukowskiforum.com/threads/16-bit-intel-8088-chip.2791/)

~~~
cpach
I had never heard of that poem before, I think it’s awesome :)

------
zwieback
Back in the early nineties I worked at Miramar Systems on an AFP server and
actually a full AppleTalk stack that ran on Windows 3.11 (VxDs!) and OS/2.
Macs could speak full AFP, and whatever the printer protocol was called, to a
network of PCs.

IBM sold a version of our stuff that was called LanServer for Macintosh so
back then Macs and AFP were covered!

It was quite a popular product at the time. Although I never enjoyed working
on Macs I thought that AFP was pretty cool. We all had "Inside AppleTalk"
pretty much memorised - what a great book.

------
codex
I would have preferred NFSv4 over SMB2. They are quite similar technically,
but the former has less chance of veering off into supporting strange
Windowsisms which would be hard to translate to a POSIX client. That said,
SMB2 is widely deployed and Microsoft is innovating in SMB faster than NFS is
improving.

Fortunately OS X does not use Samba as its SMB2 client.

~~~
mhurron
Most users are going to have a Mac and a Windows machine, so SMB makes far
more sense. You're going to see NFS in enterprise situations, and Apple does
not really target that market.

~~~
mitchty
Not only that, but NFSv4 has its own issues with regard to userid/gid mapping.

Setting that up with Kerberos is... not fun (speaking from the
Solaris/Linux/AIX side of things).

Not that SMB would be better for most Unixes, mind you, just that NFSv4 is
its own version of hell in some ways.

------
velodrome
This is great. I can finally interoperate with Linux and Windows.

Every time I connected with AFP, my CPU would spike to 100% under Ubuntu.

~~~
lysol
This isn't new functionality, it's just that SMB2 is now the default.

~~~
velodrome
I heard it was kind of buggy in 10.8.x.

~~~
deathcakes
Buggy is not nearly strong enough - supporting lots of mixed environments has
given me ragequit levels of stress, precisely because Apple dropped Samba and
decided to write their own.

~~~
alayne
Why are people doing so much peer level file sharing anyway? Performance?
Security? In a company it would be a lot better to have centralized servers
with high availability, probably with some kind of web-based CMS to store
files, something like Confluence or Sharepoint.

~~~
elithrar
> Why are people doing so much peer level file sharing anyway? Performance?
> Security? In a company it would be a lot better to have centralized servers
> with high availability, probably with some kind of web-based CMS to store
> files, something like Confluence or Sharepoint.

You have to assume that the majority of Apple's users are home/edu/SMB users,
who don't have centralised infrastructure.

They just want to share files between each other, or from a small NAS. Those
who have Macs in an enterprise environment likely have a working solution via
other methods.

------
inthewind
Can someone chime in with the pros and cons of each network filesystem? And
which is a good fit for Linux - or rather for those OSes that don't need to
cooperate with Windows? Was NFS ever updated - or replaced? How much of SMB is
now open after court rulings? And is there one that is technically better than
another?

~~~
nvr219
Use ReiserFS

~~~
inthewind
Is ReiserFS a network file system?

------
sytelus
OS X's interoperability with PCs is actually more badly broken than this.
This is mind-boggling, because if Apple got this one thing right, more people
would be willing to buy a Mac Mini and put it on their home networks. I
recently tried to use an external device full of NTFS-formatted hard drives
with a Mac Mini. The first thing I discovered was that OS X can't natively
write to NTFS-formatted drives. Even after you discover and purchase 3rd-party
apps that enable writing to NTFS-formatted volumes, OS X can't share them via
SMB. This is because Apple's own SMB implementation, the one they tried to
replace Samba with, is broken. So you have to disable that and install open
source SMB anyway. There are quite a few hoops to jump through to accomplish
this.

So there is no built-in way to share your external drives connected to Mac
Mini on network if they are NTFS formatted.

------
shinratdr
I'm hoping this results in vastly improved SMB support, which - and here I am
in full agreement with other commenters - has been infuriating since Apple
decided to roll their own. I frequently hop to my Windows machine to manage my
Windows Home Server even though I'm just doing simple SMB communication and
file cleanups that should work fine in OS X, but don't.

------
polshaw
Related: I take it there is no maintained open source SMB server that isn't
GPLv3 these days? Sucks since Apple abandoned Samba. How stupid would it be to
use Apple's old GPLv2-era Samba for an appliance? (guess: very?)

~~~
lmm
Can you not just install modern samba yourself?

~~~
icebraining
I'd guess polshaw wants to sell/distribute an appliance containing proprietary
software (hence the reluctance to use a GPLv3 licensed component), not just
install it on his own device.

~~~
jra_samba_org
No, polshaw sounds like he has religious reasons against GPLv3 (as do Apple
:-).

There are many people shipping Samba on an appliance containing proprietary
software, there is no problem doing that with GPLv3 code.

~~~
polshaw
Hi there, hopefully you will still see this..

No religious objection. The problem for me with GPLv3 is that it is not
compatible with (privately) signed code. If it is possible to run unsigned
code on my appliance then my proprietary code would not be secure, putting the
entire business in jeopardy. If you can square this circle then I'd love to
use it.

I'd be interested to see a list of shipping appliances (meaning not open
hardware platforms) with GPLv3 if you know of any.

~~~
lmm
>No religious objection. The problem for me with GPLv3 is that it is not
>compatible with (privately) signed code. If it is possible to run unsigned
>code on my appliance then my proprietary code would not be secure, putting
>the entire business in jeopardy. If you can square this circle then I'd love
>to use it.

Your code is still under the full protection of the law. And no signing
mechanism will prevent a competitor from simply dumping the flash and reading
your code off it, if they really want to - if anything that is probably
easier than running their own code on the system. So I don't see what using
the GPLv3 changes.

If you're really paranoid, how about running Samba in a chroot/jail/etc. where
it has access to the data files it needs to serve/store, but not to your code?
(Your code can operate on the same data from outside the chroot.) As long as
you make it possible for the user to upgrade Samba - which should be fine: you
don't care what code runs inside this chroot, because it only has access to
the same files the user could reach via Samba anyway, so the Samba that runs
in the chroot doesn't have to be signed - you're compliant with the GPL but
haven't exposed the rest of your system.

~~~
polshaw
Thanks for the follow up! +1

>law

If it is a straight rip-off then the law should protect (at least in the
West), but if it were just _used_ (for learning or adapting from) then it
could be exceedingly hard to prove or even know about. I suppose what I would
be most worried about is if it were leaked such that anyone could use it on
any platform without paying. Who would buy an Apple TV if you could run it off
your Raspberry Pi? (I know the analogy doesn't quite work - the aTV is decent
value as hardware - but as a startup I will have higher costs, so higher
prices.)

>flash dump

This is why you encrypt the private data on your flash :) Decryption keys can
be stored in the processor (it's been a while since I looked at the system -
I'll have to look again, but it seemed solid). So that means they'd have to
either de-solder the RAM while somehow keeping it freezing cold, or use an
electron microscope or something on the CPU. If they are that capable then I'm
sure they could just rewrite the code themselves without my 'help'. I'm not
sure how much security compilation would offer, and if the details of that
matter, that's something I should look into further. But the above seems
pretty solid AFAICT.

>samba in a chroot/jail/etc

Thanks! This is a great idea. IIRC it is possible to break out of a chroot,
but (IIRC again) not out of BSD jails... so that could be a great option down
the line if I am able to use BSD. It adds a fair amount of complexity, though,
both legally (although it seems sound at first thought) and technically (can
they be hacked?), so perhaps one for later.

