Since this was an accidental post (now removed) in advance of the actual announcement and release, we've buried this story. That way we can avoid treating the actual announcement as a duplicate when it happens.
So Microsoft is treating the Mac more seriously as a professional platform while Apple is treating it less seriously? I'm not saying this in a snarky way; I mean it literally, as a change of corporate strategy at both companies. Microsoft seems to be saying, "If you are a pro mainly using the Mac for professional work, we want to do a better job of empowering you," and Apple seems to be saying, "If you are a pro mainly using the Mac for professional work, you need to get used to the idea that we are deemphasizing your market. No hard feelings."
If we're comparing Apples-to-Apples (seriously, pun not (originally) intended), Apple continues to rev the Xcode IDE every year, adding more features. Most recently, they added a nifty visual memory debugger[1] and have taken another stab at device & certificate provisioning.
You're right, Apple is revving Xcode, but feature-wise and bug-wise, Xcode needs serious TLC, especially on the Swift side. The regressions introduced with every version don't help, and tools like Interface Builder, while seemingly helpful, usually make development within teams worse. I still use Xcode every day, but it definitely needs hardening.
There goes the good ol' anti-Interface Builder rant again. The real problem is that Interface Builder is too easy to pick up, while there's real depth and challenge to using it well, just as there is with doing everything in code.
- Don't throw all your screens into one file; you can even use one screen per file, just like with xibs
- Use code to style items and to create controls you use more than once
- Render the controls you build in code dynamically inside Interface Builder, so you won't end up with "ghost town" storyboards; everything stays visible at a glance
Unless you're working at "Facebook scale" Interface Builder in the hands of an expert will get you very far.
Would you throw all your code into one big 8kloc controller? Of course not, but somehow people manage to cram every screen into the same .storyboard file, just because they can. Then they complain about merge conflicts, which isn't really a surprise given that they're managing an 8kloc XML file.
Would you set the background, border, font and color of every button, every time you use it, in code? Of course not; you'd specialise a button class. But somehow, once people start using storyboards, they select every button manually and set those properties time and time again, when they could use a specialized class that renders in Interface Builder exactly as it will look in the app.
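A rough sketch of that approach (the class name and styling values here are hypothetical, not from any particular codebase): subclass UIButton, apply the style in one place, and mark the class @IBDesignable so Interface Builder renders it the way it will look in the running app.

```swift
import UIKit

// Hypothetical "house style" button: styled once, reused everywhere.
// @IBDesignable asks Interface Builder to render the class live on
// the canvas instead of showing a blank placeholder.
@IBDesignable
final class PrimaryButton: UIButton {

    override init(frame: CGRect) {
        super.init(frame: frame)
        applyStyle()
    }

    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
        applyStyle()
    }

    // Interface Builder calls this when drawing the designable view,
    // so the canvas picks up the same styling as the app does.
    override func prepareForInterfaceBuilder() {
        super.prepareForInterfaceBuilder()
        applyStyle()
    }

    private func applyStyle() {
        backgroundColor = UIColor(red: 0.0, green: 0.48, blue: 1.0, alpha: 1.0)
        layer.cornerRadius = 6
        layer.borderWidth = 1
        layer.borderColor = UIColor.white.cgColor
        titleLabel?.font = UIFont.boldSystemFont(ofSize: 16)
        setTitleColor(.white, for: .normal)
    }
}
```

In IB you then set each button's custom class to PrimaryButton once, rather than re-entering colors and fonts on every instance.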
I've been using Xcode since iOS 2 and once Apple introduced IB for iOS, I was all for it. There are a bunch of challenges with IB that are outside of the solutions you proposed.
How do you fix these issues?
- Fixing misplaced views when transitioning from retina / non-retina screens or even different retina screen resolutions.
Reasoning: When I worked at Amazon this was extremely annoying. You didn't even have to touch the storyboard; merely opening it made tons of misplaced views show up. This causes a problem with any version control system, because the XML changes are reflected in git even though nothing actually changed.
- Rendering Snapshots
Reasoning: I use snapshot tests to verify all views via unit tests. You can use IB to capture the view and load it programmatically, but you end up having to load the whole storyboard just to render one view controller.
- Setting all properties via IB
Reasoning: When I set up a button or a view, if half of my properties are in IB and half of my properties are in code, how do you determine what goes where? Apple did add IBDesignable, but wiring that up just so you can pick values from a drop-down is more complicated than simply setting the property on the object in code (which renders correctly in snapshots, never suffers from misplaced views, and keeps the property configuration in one place).
The teams that I've worked on aren't that big, but I can say that teams I've been a part of that don't use IB have worked a lot faster than IB teams. You may be a lot better at IB than I am, I only stopped using it 1 year ago for my projects after about 4 years of using it.
> Fixing misplaced views when transitioning from retina / non-retina screens or even different retina screen resolutions.
No solution. Non-retina users should carefully commit :(
> Rendering Snapshots
I don't think I understand the problem. Nothing stops you from instantiating and rendering just one of the view controllers on its own?
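For what it's worth, a sketch of what I mean (the storyboard name, identifier, and sizes are made up): a single scene can be instantiated by its identifier and rendered to an image for a snapshot comparison, without driving the rest of the storyboard's flow.

```swift
import UIKit
import XCTest

final class ProfileSnapshotTests: XCTestCase {

    func testProfileScreenRenders() {
        // Instantiate just the one scene by its storyboard identifier.
        let storyboard = UIStoryboard(name: "Main", bundle: nil)
        let controller = storyboard.instantiateViewController(
            withIdentifier: "ProfileViewController")

        // Force the view to load and lay out at a fixed device size.
        controller.view.frame = CGRect(x: 0, y: 0, width: 375, height: 667)
        controller.view.layoutIfNeeded()

        // Render the view hierarchy into an image; a real snapshot test
        // would diff this against a stored reference image.
        UIGraphicsBeginImageContextWithOptions(controller.view.bounds.size,
                                               false, 0)
        controller.view.drawHierarchy(in: controller.view.bounds,
                                      afterScreenUpdates: true)
        let snapshot = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        XCTAssertNotNil(snapshot)
    }
}
```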
> Setting all properties via IB
I'm doing more and more in code nowadays, including constraints that are also rendered in IB. Makes it easier to change things like ratios or heights all across the app.
Interface Builder then glues it all together and I can throw in some one-off items.
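A sketch of the constraints-in-code idea (the view and constant names are hypothetical): keep shared ratios and heights as constants, so changing them in one place retunes the whole app, whether a given screen was laid out in IB or in code.

```swift
import UIKit

// Hypothetical shared layout constants, defined once for the whole app.
enum Layout {
    static let thumbnailRatio: CGFloat = 9.0 / 16.0
    static let rowHeight: CGFloat = 44
}

final class CardView: UIView {
    let imageView = UIImageView()

    override init(frame: CGRect) {
        super.init(frame: frame)
        setUp()
    }

    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
        setUp()
    }

    private func setUp() {
        addSubview(imageView)
        imageView.translatesAutoresizingMaskIntoConstraints = false
        NSLayoutConstraint.activate([
            imageView.topAnchor.constraint(equalTo: topAnchor),
            imageView.leadingAnchor.constraint(equalTo: leadingAnchor),
            imageView.trailingAnchor.constraint(equalTo: trailingAnchor),
            // One shared aspect ratio; change Layout.thumbnailRatio to
            // update every card in the app at once.
            imageView.heightAnchor.constraint(equalTo: imageView.widthAnchor,
                                              multiplier: Layout.thumbnailRatio)
        ])
    }
}
```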
What I want to know, as somebody trying to learn C/C++: is Xcode good for somebody who wants to try their hand at bare-metal stuff? I always hear how bad it is outside of app development, but can it help me be productive compared to a CLI environment, or to just a text editor with nifty add-ons?
I've written a lot of C++ code in Xcode. Xcode is a pretty poor IDE in general but it works reasonably well for C++. IMO things like autocomplete and a graphical debugger are enough of a productivity boost that I wouldn't want to use a plain editor.
Yes, and the ability to find call references (who is calling this function, where is it used) from the drop-down menu is really helpful. I miss the ease of access to that sort of functionality in Visual Studio.
Yes it is there for me, but the functionality is different on Xcode as it will find all references when you open the menu, so you can see the results straight away. One less click (and perhaps one less popup hidden "find symbol results" pane in VS).
That was what I was highlighting, not the fact that Visual Studio somehow doesn't have that functionality.
I would recommend QtCreator if you just want to get pure C++ work done. It has less focus on app development, and it has way better highlighting (semantic highlighting thanks to clang), refactoring and debugging.
Instruments, included with Xcode, is a really good tool. I used Instruments for profiling and finding memory leaks... I think it's great... except for the lack of reverse debugging, which is pretty much necessary these days.
I have used both Xcode and a text editor with a terminal for C/C++ development. In my opinion both work equally well, but I would advise the text editor and terminal route, because it forces you to learn how things like build systems, LLDB, Valgrind and more actually work. This knowledge is essential when the IDE does something unexpected and you need to fix it.
> If you are a pro mainly using the Mac for professional work...
This is an extremely dishonest assessment, implying that only 'developers' do professional work. The Mac is used by engineers, CAD/CAM users, designers, illustrators, artists, musicians, etc., and reducing it only to developers is disingenuous.
I for one am happy with the new Mac, and there are several people who've found it great for 'pro' use.
I'm running a Logic setup on my 2012 MBP, and I've never felt limited by its capabilities. I've been running Macs for music production for almost 20 years, and in my experience they have all, at some point, not had quite enough in the bag to let me do what I wanted of them. This is the first one (now four years old) that has stayed ahead of my needs. I can run multiple copies of Massive and rows of Waves plugins, no problem.
I have hit limits with real-time 3D stuff, like gaming and Houdini, but music production is still well within its capabilities for me.
The people who make film/game soundtracks need a LOT of RAM and fast disks, because they're using terabytes of samples in each project. Also, Massive has pretty low requirements.
The main gripe musicians have with Apple is each new OS X release breaking their setup every year. The sandboxing in El Capitan was particularly disastrous.
It sounds like you're basing this argument on hearsay and not personal experience (shocker). I have various audio setups with different Macs and interfaces, and no OS update has broken anything in my setups since the 10.6 era.
I don't think that's fair. Even major audio companies were warning customers not to upgrade to Sierra for months due to compatibility issues. Here's iZotope's page about compatibility with their audio plugins - if you bought it more than a couple of years ago, it won't work on Sierra:
Personally I had to spend a few hundred dollars upgrading my audio software that worked in Mountain Lion so it would run on El Capitan. I'm still running into a few glitches in places.
That page indicates what versions of those plugins are certified by the vendor to run on Sierra. That doesn't mean that older versions won't run, it just means they haven't been certified to do so.
I've been running software on Sierra that is only certified by the developer for Snow Leopard through Yosemite, and it still runs just fine for my needs. Apple puts a lot of effort into binary compatibility.
My personal experience disagrees with you. Waves, Native Instruments, and others, have had various issues over the years with OS X upgrades. I have projects spanning several years that forced me to wait a revision or two before upgrading to ensure they would continue to load and mix down as intended.
That's fair. I definitely have done a lot of sample-based and multi-track audio work and again have never hit that limit (on this MBP). However I do agree that if you rely on 3rd-party plugins there is a crapshoot every year as to how long you have to wait before compatibility reaches 100% and you can upgrade your system without breaking previous mixes or set ups.
How many developers need more capacity than the new MacBook Pro but still less than the largest laptops offer? The few non-gamers I know who are still CPU/GPU-limited are using clusters of machines because no single machine is large enough and in most cases it's significantly cheaper to rent capacity for the few times when they need it rather than pay up-front for something which will be idle a fair percent of the year.
1. People whose needs are satisfied by almost any laptop with an SSD and semi-recent CPU/GPU.
2. People who need the absolute top-end system but can fit their work on a single machine (e.g. if you had a model which uses 20GB of RAM, a 16GB machine is unsuitable but a 32GB machine is fine).
3. People who have so much data / computation that no normal computer can handle it.
My gut feeling is that #1 covers most of the market and the question is really how many people fall into group #2 but not group #3, especially in the context of laptops where the ceilings are smaller on both the Mac and PC side. I would further expect that a fair number of the people in the second group are not running those workloads 24x7 and thus have a practical, often cheaper, option of renting an hour of time on AWS/Google/Azure/etc. when they need to do something and get the results faster rather than leaving their laptop running for a day or two — even the best laptop GPUs are smaller capacity than what you can rent on a server.
"Web developers" should compile a browser (like Chromium) every so often, to remind themselves of the massive gap between what they do and what "developers" do. Those CUDA cores and extra CPU are very important to non-web developers.
Pretty sure folks in the other professions mentioned are also lamenting over the missing successor to the Mac Pro, and the loss of not-yet-outdated ports on the MBP (especially since they probably have more peripheral devices than developers).
Sometimes for reasons beyond my understanding: I've seen many people bent over a 15" MB playing with MRI images in matlab/spm/... It's hard to imagine something with worse posture, worse ergonomics, or less efficiency.
Not sure why you're getting downvoted for what is IMO an OK reply. Only OsiriX is strictly OS X, and even though I haven't used it, I'm pretty sure there are viable Linux/Windows alternatives for it and for the other software you mention. Anyway, next time I see someone in a situation like the one I pictured, I'll just ask why they chose the MacBook.
I'm a developer and plan to get the new MacBook Pro. I just don't use function or escape keys that much. So, it seems like it's a subset of developers that are bothered by this.
I used Windows when I was young, then switched to Linux for 10 years professionally, and finally I switched to the Mac about 5 years ago. I find it much easier to use day-to-day than Linux. Especially using multiple displays in different offices.
The "oh, I can live with this" moment for me with the new Mac was remapping Caps Lock to Escape. The new keyboard layout is not as bad as I expected it to be. Not as good as I wanted, but not as horrible as HN and other places warned.
That article is a typical example of "If it suits me, it must suit everybody," and your post is a good example of the no-true-Scotsman fallacy: "No one who _really_ used it dislikes it."
All I really know about this is how I feel about it and I must admit that I am going to go back to the PC world when the time comes to replace my current MBP. The offerings in the PC world are not perfect for me but they suit me better. I bought my MBP because at the time it was actually the cheapest machine offering all those features at a high build quality. I didn't get into a dependency on OSX and am pretty confident that I can just migrate fully to Linux. So I guess I'm not _really_ a professional Mac user.
> That article is a typical example of "If it suits me, it must suit everybody."
99.9%+ of criticism of the new MBP has been of the form "Without ever interacting with one, I can tell it is unsuitable for me and therefore is unsuitable for anyone, anywhere, in any professional purpose, ever".
> the no-true-Scotsman fallacy of "No one who _really_ used it dislikes it."
More like "people are pre-emptively concluding, without ever having so much as been in the same room as a new MBP, that it is the antithesis of everything they need from a computer".
Which is, to put it bluntly, idiotic. I've suggested in the past that this feels less like "I have legitimate criticism of this product" and more like "I hate the manufacturer, always have hated and always will hate the manufacturer, and see this as a convenient cover for venting my hatred of the manufacturer". Notice how much of the criticism veers quickly away from specific aspects of the product and into "this is classic Apple", "this is how Apple treats users", "this is what's wrong with Apple", "Apple abandoning a key segment again", etc.
> 99.9%+ of criticism of the new MBP has been of the form "Without ever interacting with one, I can tell it is unsuitable for me and therefore is unsuitable for anyone, anywhere, in any professional purpose, ever".
While there is some truth in that, it's not what I took away from the discussion. Most people complain along the following lines:
* "I expected more."
* "I expected the price for the same specs to drop or at least stay constant but not to rise."
* "I cannot use this machine to do the work in the way I do it currently. This and that port is missing."
While there is a great deal of hate towards Apple, the comments I've read here are driven not by hate but by disappointment. The old MBP lineup was very good for these people. They like those machines very much. Some love Apple, some don't care, but they all agree that the old hardware is very solid for a reasonable price.
I really do think that the new MBPs are good machines for >90% of the current MBP users. A bit more expensive but not too unreasonable given that most are locked into the Apple ecosystem. Some may need some adapters for things like digital cameras, projectors, monitors, USB sticks, keyboards etc. but that doesn't really matter to a true Apple customer. Most of the time the machine is used without those devices.
What Apple should worry a bit about, though, is that the top 1-3% of users are now looking for other hardware. But who am I to worry about Apple's strategy? They probably don't need those few power users in their future business model anyway. I don't even hold Apple stock currently. My old MBP still runs fine, and I'm not locked into their ecosystem; I can move on to greener pastures at any time.
I don't care about the Touch Bar, and while I'm miffed about the loss of MagSafe, I can deal with shelling out extra for accidental damage insurance. Personally, my disappointment is with the specs, which no amount of time with the machine will change.
The Mac I'm currently using was purchased 3 years ago with 16GB of RAM, and if I replace it I will be stuck with the same capacity. I imagine there are a lot of "Pro" market segments that are well served by 16GB or less, though. I'm hoping the next revision gets a >16GB option, and that it's released before I need to replace this one.
Lately, my main machine has been a mid-2014 mbp with 16GB of RAM and I recently purchased the new mbp with 8GB of RAM.
Comparing them side by side is a very strange experience. _Technically_ the new one should be slower, but it doesn't "feel" slower. At times, though, there is a bit of stutter that I can't put my finger on: am I just looking for an excuse to call it slow, or was the loading time on opening this project always this long?
In the end, I believe the 8GB variant is suitable for most folks, while I myself will upgrade to the 16GB version. That said, I've opened plenty of large projects on the new laptop and have compared the speed of everyday tasks: transcoding videos, opening million-row (maybe an exaggeration) Excel docs, and web dev work with 2 VMs running. Overall, performance is fairly on par with my old machine.
All of that being said, I do believe the machine is a touch expensive. I ordered mine from Amazon when they had some ridiculous pricing and came out ahead, so I don't feel bad about offloading it on Craigslist to get the 16GB version.
Thanks, that's a helpful review. It has me wondering whether it benefits enough from the hardware improvements that swapping would be noticeably faster.
I'm thinking along the same lines. If you're interested in other tests/simulations, I can run them side by side and report back. Today I've run 2 VMs, about 20 tabs, a large project loaded into an IDE, and two shell sessions without any noticeable slowdown.
I appreciate the offer, thanks. If you have a desktop VM (Windows or OS X) lying around, I'd be interested in how responsive they are with enough of them running to make things interesting. They tend to be the most demanding/least tolerant VMs I have to deal with. But don't bother unless you're really into it.
My hypothesis was that the computer "runs" better because of faster RAM and a faster processor, but that because the system has only 8GB, it can't be pushed __too__ much.
Testing:
1x headless Linux w/ 1 CPU core and 512MB RAM
2x MS Windows 7 with display, 1 CPU core and 512MB RAM each.
IDE consuming ~987MB RAM
6x Chrome Tabs open
3x Safari Tabs open
Apple Mail.app
Other misc software running in the background.
Each VM added increased swap usage. With one headless VM, swap was at ~50MB. Adding a Win7 VM with display brought it up to 251MB; adding a third VM with display brought it to 550MB.
CPU Usage peaked at ~70% when adding VMs with some delays in response when browsing simultaneously.
With all VMs running, mucking around in a VM and running minor tasks in the background (compressing and decompressing junk data) brought usage to about 30%; CPU usage never peaked past 50% without additional load.
Conclusion: I'm actually really happy with the laptop. The whole dongle hell really doesn't exist for me; in fact, I was able to remove cables from my desk. Before, I had to plug in one power cable, one Thunderbolt/DP, and one USB; now all three go into a single dongle with one cable to the computer.
For an external HD, I've been using the Samsung SSD over USB 3 to USB-C for about a year, so that made life easier.
Prior to this, I had to carry an Ethernet adapter for remote work; it was simply replaced by an Ethernet adapter of a different kind.
General USBs, thumb drives, etc are plugged into my monitor (which has a hub) just as before, no difference.
If this laptop started at $200 less, I'd say this is a very adequate laptop for work purposes, including running VMs.
I've spent the last 4 years doing development on a 2012 MacBook Air with 8 gigs of RAM. It's been totally fine, except when I've got a million Chrome tabs open. (Declaring bankruptcy and closing them all at once feels great, though.)
The article is worth reading. They have (correctly) kept asking "why does VS need more RAM than that?" and just optimized the code whenever the footprint grew.
And I'm genuinely confused by all these people complaining about 16 gigs of RAM not being enough. If you have a laptop today with 16 gigs of RAM, have a look. Do you actually run out of RAM while working? (And if so, what on earth are you running?) It seems genuinely hard to fill 16 gigs without Chrome or Slack running. Look at all the stuff you can fit in that much RAM: https://www.zdziarski.com/blog/?p=6355
I'm also a big fan of pushing app developers to fix their cruft. Maybe in 2016 it's not OK to have apps that suck up as much RAM as possible. Maybe app developers shouldn't write super-inefficient software just because next year we'll have bigger computers anyway. Maybe if you're writing software (any kind of software) that really does need more than 16 gigs of RAM to work effectively, you should fix your shitty code instead of demanding everyone buy new computers. The Atari 2600 had 128 bytes of RAM and played all sorts of cool games. The original Xbox had 64MB of RAM and ran Halo. Maybe it's not Apple's fault that your fancy 3D graphics program can't work properly in 'only' 16 gigabytes of RAM. (Especially given there's 2 gigabytes/second of SSD bandwidth available on those new machines. Yummy!)
I love the fact that the new machines are small and portable. The hardware is more than capable of doing everything I need it to do. The only barrier to all day battery life now is crappy software.
Talking about VS for Windows: that's the maximum footprint of the VS main process. If you use one of the WP8/W10M/Android emulators, you need an additional 1/2/3 GB for the VM (plus overhead). Throw in some browser tabs, git (in VS15 it will run in its own process instead of eating up the main process's memory), some .NET Native/LLVM, and the OS itself, and you'll find that having 16 GB gives you quite some comfort.
I, too, miss the good old days of splitting bytes into nibbles. Sometimes I program a microcontroller just to feel the walls moving in.
Software today is designed the way it is because developer time matters more than hardware specs. Getting the software running and onto the market is much more important than memory footprint. Very few companies worry about hiring an assembly programmer to squeeze out 10% more performance. They don't even use C/C++, because it's that irrelevant: before those programmers even finished, the market would have moved on and the product would be obsolete.
People want the RAM because it is cheap and they can put it to good use for their data mining, video editing, virtual machines or whatever.
I can't say I really disagree with anything you've said. Good points all around. I mentioned how I use it elsewhere, but I also mentioned that it may in part be a goldfish effect, where I'm just being sloppy and using whatever is there, and then some.
Speaking of Chrome... I know of at least one browser that avoids keeping every tab active and running, but despite the cost, the tradeoffs Chrome makes lead to (in my opinion) a snappier experience.
On a related note, I know using Safari can dramatically improve battery life and may have some better resource usage/performance characteristics, but I've never been able to get fully used to it without getting frustrated. It feels like death by a thousand cuts. For example, I can't tell whether the dev tools are much worse or I'm just not understanding them the way I do Chrome's and Firefox's.
The RAM you were using 3 years ago is a lot slower than the RAM they have put into the new MacBook Pros. So there is that benefit, at least.
32 GB is a lot of ram. What do you do where you need that much ram in a laptop? Is that the limiting factor in performance for you versus another component? Have you considered a desktop/tablet combo or anything like that?
To be fair I don't bump up against it daily so this isn't some kind of deal breaker. Just disappointing.
Mostly virtualization (server and/or desktop operating systems) but sometimes decent sized datasets (which really aren't too bad from SSD) and occasionally those things combined with software that is... written poorly. I used to use both a desktop and laptop but it was more trouble than it was worth. I might have to revisit but I'll miss having those workloads local.
I have no doubt I'm an outlier and I certainly don't expect Apple to change anything. I'm not really resentful, just disappointed.
I was really just curious because for the longest time I got by with just 4gb on a MacBook Air until I switched to the MacBook and 8gb. I run VMs and other software on it but never really ran into a RAM issue.
I think there is hope, though. They'll put 32GB of RAM in eventually. The explanation was that 32GB uses too much power, and of course the rebuttal is "stop making it so thin," which I sympathize with. On the other hand, I do like thin and light computers.
I'm sure part of the problem is I grow into my available RAM and storage like a goldfish. I'm sure some attention to optimization or constraints would make things less limiting but it's nice not having to think about it and just do stuff.
I like thin and light computers too. I have a MacBook Air I use to browse the web and I love it. Sometimes I pine after a career in web dev since it would handle it like a champ.
IIRC the explanation is really "this is what Intel's stuff is supporting, and we're stuck with what Intel supports", and it would've been potentially another year or more of no MBP refresh if they waited for Intel to get there.
They support > 16GB if you use DDR4. LPDDR3 is limited to 16GB.
Given that Apple maximises for long battery life/power efficiency/thin-ness (where heat = bad), I think they made the right decision.
There are very few people who genuinely need more than 16GB of RAM in a laptop computer.
Would I get 32GB if it was available? Yes. It would make my work a little easier (multiple VM environments), but it's hardly the end of the world on 16GB.
Yeah. I mean, look: they could design a laptop for the vast majority of their users, or they could design one that is best for <1% of their users. It's a clear choice.
There are a lot of use-cases for >16GB of RAM, but I'll share mine specifically. I develop network appliances for a "medium-to-large" enterprise. Specifically, these network appliances provide BGP, stateful packet filtering, and the other network services provided by our company's products.
When working on these appliances, I tend to spawn hundreds of VMs (close to 1,000, but not more on my laptop due to RAM), and each VM has between 4 and 48 virtual networks. Then all the appliances begin working: advertising and responding to BGP updates, setting up and tearing down VPN tunnels, and running other test scenarios.
Right now, when I want to do this for a network of size <N> as spec'd above, I can't use my laptop. I end up having to provision hosts in one of our data centers just to get my work done. If my MBP had 20, 24, or (best!) 32GB of RAM, that wouldn't be the case. Maybe in 4 or 5 years 32GB won't be enough either, but right now I'm only concerned with the immediate. If the MBPs had grown in maximum memory (like other laptop vendors' models), this would have been a great improvement to my workflow and would have let me keep the work local.
There are probably tons of more common use-cases for wanting all that RAM out there, but that one in particular is mine.
Yeah I can see that (though keep in mind that this is a very fringe case). But on the other hand I'm not complaining that my MacBook can't run any game available at 60 fps.
If I want to do that, I get a desktop. I think that has been a common theme for a long time. Power - desktop. Portability - laptop. We're asking Apple to make laptops as powerful as desktops. It just won't happen, unfortunately.
I interacted with one last week. That interaction confirmed everything I said earlier about the MBP (horrible keyboard, bad choices about ports, etc.).
I've been around the consumer tech industry a long time. I've shipped a considerable amount of consumer computing gear and am very familiar with the design process and the many tradeoffs that happen when you take something from a cool idea to heavy in someone's hands. One of the places I did this was Apple, in fact.
Apple seems to have decided to shift the market for the MBP by making tradeoffs that don't target any of the professionals I know.
I also know that (a) my predictions about the hardware were correct, and (b) none of the professionals I know plan to buy one, beyond the one or two samples we're getting into our group just to make sure we're making the right decision.
And now we're looking at Linux laptops in a serious way.
Anecdotally, I know that I and most of my friends would never buy one at this price point for just the basic model. But we'd be fine with it if our workplaces bought it for us and paid for all the expensive cords to make a multi-monitor setup possible.
> More like "people are pre-emptively concluding, without ever having so much as been in the same room as a new MBP, that it is the antithesis of everything they need from a computer".
Many of the complaints are about plain old hardware specs, especially RAM and the USB-C connectors. You don't need to hold a MacBook in your hands to understand how much RAM it has and how many dongles you'll need to buy. Same for complaints about the price.
I am in a similar situation... I bought my current MBP and will more than likely use it until it dies, but I'm not sure I'd buy another Mac. The touchpad on the MBP is second to none, which is what kept me this time, but there's a massive premium there.
I use Mac, Windows and Linux daily... and honestly, the Mac has the oddest UI of the three for me. Having bash is pretty nice, as is Homebrew...
I'm really hoping MS puts similar effort into a Linux version, since I know I'm not the only one going that direction... I've been considering it on my MBP...
I just bought a Magic Trackpad 2 and am using it with Windows 7. It requires the purchase of a $10 utility to get all the fancy stuff (scrolling, right click), but it's not bad.
Got a link to said utility? I'd love to have one at work (windows)... Currently using a mouse, the apple trackpad is so good though, only device I don't miss a mouse with.
Agreed, I love my Macbook Pro (2015 edition), however if I have to replace it in a few years I'm not buying the current Macbook Pro, instead I'll look for a comparable Windows laptop and put Linux on it. (If only there was a real Macbook competitor out there)
No matter what you think the specs say, the fact is that the software and hardware are so well integrated it tears strips off "superior-spec'd" Windows counterparts in the real world. This has always been true of Macs.
Having used both lines of machines for many years, if I want raw power and 'specs' I use the PC.
His argument falls short the second you want to use non-optimized software. He even touches upon this: "I understand people need to use programs from other developers, but at some point they need to play catch-up." This is rather difficult to take seriously. So you can efficiently do what Apple says you can do, but nothing else? That might work in his line of work, but it doesn't for the rest of the world.
For his use case there is a pairing of software and hardware optimized to work with each other.
In "professional" situations this is not, as I understand it, particularly unusual. And yet the unsuitability of the new MBP for "professional" use cases has been widely assumed on HN. It's interesting to see someone who actually has one of those use cases and has actually used the new MBP, weighing in to say "it works, and here's why". Not least because the level of hardware/software cooperation Apple can muster is a selling point for him, but has been ignored by all the "I'm a touch typist whose workflow consists exclusively of function keys, the touch bar makes this a worthless toy to me" noise coming from HN.
While it is not unusual, it's not what the MacBook Pro has been known for since the transition to Intel CPUs. So it fits some specific use cases, at the expense of every other one. And if they did make the hardware faster, all use cases would benefit.
All those people with a different use case are right to be annoyed by that. But just as this piece completely lacks unbiased opinion, so does the noise coming from HN (and the tech community in general).
I bought a MBP in April this year. It was my first Mac. I am quite happy with it, the retina screen & touchpad are really good, but I don't think my next one will be a MBP.
Windows now has the Windows Subsystem for Linux, and it is actually quite good for Linux development on Windows. (I manually updated it to 16.04.) I don't need Cygwin anymore, and it compiles to ELF format. I don't know if Microsoft will keep it, but if the goal is to attract developers who deploy on Linux, it might attract all the ones who cannot migrate to Linux due to proprietary apps or don't want to tweak their system.
I recently tried to use Linux (Ubuntu 16.04 and 16.10) as my primary desktop (once again I think I tried every year since year 2000) but still failed having 2 screens supported with 2 graphic cards, bluetooth headset being connected and sound through HDMI.
There's no killer app for me on the Mac (I don't use GarageBand or Final Cut Pro). Maybe I'd miss Preview, which is nice for PDF page re-ordering and merging (I could not find a free equivalent on Windows).
>I’m an editor at Trim Editing in London, where we cut high end commercials, music videos and films.
So not a programmer, and not someone who was using the function keys in the first place based on that article. Furthermore he claims it's "faster than editing on any windows system" because Final Cut Pro X is integrated so well with the hardware he doesn't need more memory or CPU. Sorry, of all the applications he could've chosen, claiming that a Windows box with more memory and a better CPU would be slower is... asinine. Fanboy alert.
> "A 'Professional' should be defined by the work they deliver and the value they bring, not their gear."
This is absolutely silly. A professional blacksmith can't work without a proper anvil. There really is something to be said about having proper tools for the job. You can't drive in a nail with a shoe, you'll need a legit hammer.
No matter what you think the specs say, the fact is the software and hardware are so well integrated it tears strips off “superior spec’d” Windows counterparts in the real world.
Could the statement be any broader? What are the "superior spec'd Windows counterparts in the real world" he's compared it to? Also, he's using Final Cut Pro, which happens to be an Apple product, so if that's faster due to integration, how does that help anyone using non-Apple software, which I presume is the majority of MacBook Pro users? I edit a lot of photos and I don't use any Apple software for it.
As I understand it, the new MacBook Pro has not been generally delivered yet. If he has had one for a week, it must be through some special Apple program. So of course he likes it, otherwise he wouldn't have received one.
On one hand, Microsoft stepped up their attention to individual professionals (e.g. designers, developers, etc., more or less like Apple 15-20 years ago) and Apple seems to be forgetting them.
On the other hand, Microsoft is infamous for privacy issues, forced upgrades and updates, whereas Apple emphasizes privacy.
I like MacBooks as laptops, but on the software front Apple seems to rest on their laurels, and developers seem to use MacBooks either because of Unix or because there's more money in the App Store than elsewhere.
"Why would you buy a PC anymore?" - Tim Cook on the iPad Pro launch.
Don't be surprised if Xcode for iPad Pro is a thing soon. It's becoming clear he sees iPads as the future of computing; it's all he uses personally, and developing its own apps is one of the last things it actually can't do.
iPad Pro using mosh into a Linux box running tmux and vim. A great development environment for me - 11-hour battery life, built-in LTE connection, huge screen that I can lay out how I want, easy to carry around with me, and I can use it for drawing and sketching UI concepts for my clients before I start on any work.
I don't use it 100% of the time (I also have a desktop machine) but it works extremely well and offers me things that a laptop cannot.
EDIT: It also doesn't have an Escape key - but I use Ctrl-C in Vim, since I don't have to move from the home position that way.
> People aren't going to do development on a tablet or a smartphone anytime soon.
My setup proves you wrong: Nexus 7, Bluetooth keyboard, a Debian chroot with git, vim, python and apt. I take that with me to sketchy places instead of my laptop. I think a 10-inch tablet would be more comfortable to read, but I love the Full-HD screen and pocketability of the 2nd-gen N7. I find it amazing that, 3 years later, no other Android tablet has a comparable screen resolution.
It is, for learning basic Swift. For developing you want to use a setup that allows you to handle thousands if not hundreds of thousands of lines of code. A single-process touch interface isn't that, I think.
Yeah, Microsoft's recent attitude toward development tools is excellent. They open-sourced a lot of code, released Visual Studio Code, and integrated a bash shell into Windows. I assume Apple is becoming a less dev-oriented company, seeing as they are making light of pro tools like the Mac Pro and the function keys.
While I agree... MS DevDiv must be an interesting place to be right now. The Linux subsystem for Windows sucks so bad, and the Windows containers for Docker feel half-baked as well... Use the MSYS bash that comes with git instead.
That said, I'm happy to see this and do hope to see a similar effort for Linux, as many Mac users are starting to move on. This may well be an indication that the next VS on Windows could be based on the MonoDevelop base... should they want to unify that.
I've been very happy with VS Code for my needs all the same. Can't recommend it enough for js/Node.
And those of us heavily dependent on UNIX tools just install homebrew and are fine as well. It's actually better than it ever was.
I have no idea what these self-appointed "pros" actually do. Their work seems to involve an awful lot of swapping hardware components, attaching peripherals in a jurisdiction that frowns upon adapters, and pressing ESC.
Do most developers really need anything with a CPU more powerful than an iPhone 5 to build apps or web pages? In most cases if they do they're doing it wrong.
Yes, in many cases. Building is very CPU and memory intensive. Many times you have to run local servers to debug against.
I often find myself running local servers, building my apps, running my IDE, creating assets in Photoshop, and more all at once.
It helps. No reason to artificially limit processing power available. If I spend seconds more waiting for my Go command line program to compile, or Rails to boot up, or a complex web page to load, or my VMs to respond, or Elasticsearch to boot etc. etc. it all adds up. It's time that I can spend doing other stuff.
True, but we were all using most of that same technology when a core2 duo was a top end laptop, and it's not as though productivity has gone through the roof just because we can get an i7 laptop.
Mobile CPUs are only now approaching Core 2 Duo performance.
True those are slow. But I think most projects could compile a lot faster if the dependencies were compiled into a separate bundle once a day and the app code into its own bundle when it changed.
it's not needed if you're building a JavaScript app with Sublime text.
But compiled environments such as .NET take a lot of CPU and memory. It's not uncommon to have 2-4 Visual Studio instances open, which can take up 10 gigs of RAM. Couple that with continuous build/unit-test frameworks and 5-10 VS add-ons, and you need a powerful machine to keep it responsive.
I'd argue that some of what you describe should probably be an anti-pattern, or is at least a byproduct of very fast, cheap CPU power. Emacs has worked for years on much lower-end hardware and has been used to build some very elaborate systems.
These days a typical developer laptop is 5x more powerful than the system the code will be used on (mobile device, virtual server, etc.)
I do not disagree with anything you said. The only reason for powerful developer workstations is that each second of delay is compounded when you use your machine as a tool. I do use lightweight tools, but Visual Studio is still extremely powerful and not easily replaceable (I've been using it every day for the past 10 years). Visual Studio is NOT fast, especially when you add ReSharper into the mix. You could go on arguing about what the best IDE is and whether ReSharper is necessary. The fact is that lots of people still use it, and a powerful machine is needed today to keep it responsive.
That's quite an exaggerated presumption. Office for Mac has been around for years. VSC will more than likely give Sublime a run for its money on the Mac platform, as they are very similar. Just because Microsoft ports VS, it doesn't reveal anything other than giving developers who prefer Macs another IDE choice. You can still get an MBP with physical function keys, and just because Apple has decided to evolve their approach to the keyboard, it doesn't mean anything beyond that. It's very interesting that this has become such a hot topic in the dev community. Apple isn't abandoning devs, pure and simple.
>WRONG. The removal of a dozen keys from an already gimped keyboard is decidedly anti-developer.
I am a developer and I could not care less about function keys.
Why the duck would developers need function keys? Even for Vim, the age-old advice is to remap Esc so that you keep your hands on the home row.
A flexible multi-touch strip of context-aware keys can do many more things -- e.g. map debugger step moves when I'm running an IDE, trigger builds, show the SCM status of the currently opened file, etc.
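For reference, the "remap Esc" advice mentioned above is a one-line vimrc change (illustrative; `jk` is just one popular choice of trigger):

```vim
" Leave insert mode without reaching up for Esc -- type jk quickly instead.
inoremap jk <Esc>
```

With something like this in place, the physical Esc key (or its absence) stops mattering for Vim users.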
Still though, at the moment I can step through code without really thinking about it, partly because I know where the keys are based on how the keyboard feels, combined with the tactile feedback of pressing the buttons.
I would worry that with no physical presence on the keyboard, I would spend a lot more time looking at the keyboard figuring out where the function key I need is than actually getting things done.
I would be keen to see a review from somebody who uses the new Macbook Pro professionally as a developer to see if this is as much of an issue as I imagine it to be.
> I would worry that with no physical presence on the keyboard, I would spend a lot more time looking at the keyboard figuring out where the function key I need is than actually getting things done.
Exactly. In Eclipse, F5 steps into a function call, F6 steps over it, F7 returns from the current function, and F8 resumes execution, and mistakenly pressing the adjacent key can be frustrating (although, with time-travel debuggers, this issue might be alleviated?).
Interestingly, when Lenovo did the ThinkPad T430 series, they removed the spacing between the F-keys (in order to fit Esc in the same row). Such a tiny change, yet how much usability it destroyed - suddenly it was impossible to use the F-keys by touch.
It's the same argument from the BlackBerry vs. iPhone days. Maybe not great for you. The question is: for most people, are configurable F-keys more useful than touch-typable F-keys?
As a touch typist, can you please explain how function keys are particularly relevant to touch typing?
Stepping through the debugger is not typing, and function keys change role according to the selected app anyway. And when they are system function keys (brightness, volume, etc) they are even more irrelevant to typing and/or touch typing.
Besides, there's nothing particularly hard about finding a touch based F6 key compared to a physical F6 key. A key's position (which won't change) gives more of a clue than the key's boundaries.
Heck, it's called touch typing -- a touch strip doesn't sound that alien to it.
You can feel the boundaries of physical keys, unlike virtual ones on a touchscreen. The nubs on F and J serve a similar aligning purpose as, and enhance the functionality of, the interkey gaps on the function key row.
> Stepping through the debugger is not typing,
I disagree. E.g. when you're deciding to step in vs. step past vs. step out vs. run, etc., finding the right key is extremely important.
...and I challenge anyone to hover their fingers over the respective keys continuously for hours without touching them, losing their alignment, or unnecessarily tiring the muscles of their hand.
Thanks, you answered the question perfectly. I never look at the keyboard whilst typing normally. I have transitioned to using Visual Studio 2015 in the last year, and am now also developing my muscle memory of the function keys.
I use a Logitech G910 keyboard (love those Romer G switches now, even though it took a while), and e.g. setting a break point with F9 is easy as it's the first key of the third block. For a debugging session, the rest follow - I can rest my fingers on the buttons and just step through / skip over etc - no need to look at all, the focus staying on the code.
Actually knowing when I press the button too is extremely important; there's no mistaking the action on a physical keyboard.
I also have an X1 Carbon laptop, the first gen. Fantastic keyboard (and thanks to getting the i7/8/256 version, which was rather outstanding back then, it still serves me well today, even though the 8 GB is getting limiting). In its 2nd iteration, they went for capacitive function keys, much to just about everyone's chagrin. Thankfully, Lenovo listened to feedback, and in the 3rd generation the function keys are back to normal, i.e. same as mine. If they bring a 32 GB model out by the time I feel I need to upgrade, I'll probably look at another one (in 1-2 years).
There are many applications that use F-keys for shortcuts.
Not only debuggers, as others mentioned, but also some popular file managers (Windows: Far, Total Commander; Linux and OS X: Midnight Commander).
When using these applications, I can copy files using F5 - and I know it is F5 without looking, because it is in the middle and has an empty space to the left. Similarly with F8 (delete) - in the right region, has space to the right.
With touch strip, you pretty much have to look away from the screen, onto the strip.
>When using these applications, I can copy files using F5 - and I know it is F5 without looking, because it is in the middle and has an empty space to the left. Similarly with F8 (delete) - in the right region, has space to the right.
Well, no such space on the physical keyboard I'm using now. Not after F5, and not after F8.
Yes, there are such keyboards, from the popular ones Thinkpad [TX][245]30 series for example and the previous rMBP too. However, most keyboards do have the spacing.
Actually, Xcode has been updated for the Touch Bar and has some useful code-editing commands. But it does not show any debugging commands in the Touch Bar. Perhaps it will in the future.
It is possible to show F keys by holding the fn key and you can configure the touchbar to always show F keys by default.
Are you kidding me? How can you not see that this is nothing more than an arbitrary habit that you've grown comfortable with? Imagine the function keys had never existed to begin with. Don't you think we would have come up with a different way of stepping through code with a debugger? I understand that it's annoying to have to change your habits, but you're a developer, for crying out loud: your job is literally to change how other people do work, to make it more efficient, easier to learn and so on. We all know that our users often resent us, because we change how they have to do their work. But we do it anyway, because we believe deeply (and mostly rightly) that the benefits of progress outweigh the short-term annoyance of having to change habits. But when we're the ones who have to change, hoo-boy, suddenly the sky is falling. Give me a break.
Well...pretty much everything you do is an arbitrary habit that you've grown comfortable with.
Sleeping in a bed is an arbitrary habit that you've grown comfortable with, why not sleep on the floor, or in the bath?
The fact is that when developing and debugging, the function keys represent the most efficient way of stepping through code, and a part of this is to do with their physical presence on the keyboard.
I know that I don't have to use them, there are other ways to achieve the same thing, but those things have always been there and I choose the function keys because they are the best option.
> Sleeping in a bed is an arbitrary habit that you've grown comfortable with, why not sleep on the floor, or in the bath?
Because contrary to what you say, sleeping in a bed is not just an arbitrary habit. The bed is a special-purpose piece of furniture, optimised for sleeping in. The use of f-keys to step through a program is OTOH simply an accident of history. The f-keys were chosen because they were there. Had they not been, some other solution would have been invented, using the keys that were there, and (this is my point) that solution would have been just as good!
I must admit, I rarely use the function keys as they seem to be different everywhere... Mainly from years of VS usage, actually...
But I use the escape key hundreds of times a day... not having that as a dedicated key will hinder me as much as removing the backspace key would. I mean, nobody needs to go back, and you can just use the mouse with cut...
> The fact is that when developing and debugging, the function keys represent the most efficient way of stepping through code, and a part of this is to do with their physical presence on the keyboard
> an arbitrary habit that you've grown comfortable with
You mean a workflow?
Who are you (or anyone else) to decide what works best for me? Am I not capable of making my own decisions? Do I really need a hardware company making those choices for me?
>Who are you (or anyone else) to decide what works best for me? Am I not capable of making my own decisions?
Most people aren't -- from politics to personal finances and relationships, there are tons of bad decisions everywhere one looks. (Including my decision to answer this comment some would say -- heh).
We have schools, best practices, guidelines, standards etc, to try to enforce some good decisions upon people.
That said, if one feels strongly about it, there's always the decision NOT to buy such a laptop.
>And many people are making that decision, so what seems to be your problem with this?
No problem with this.
My problem is that they frame it as if their personal habits/uses are universal, and a computer that doesn't cater to them is inherently bad (as opposed to just bad for them).
> Who are you (or anyone else) to decide what works best for me? Am I not capable of making my own decisions? Do I really need a hardware company making those choices for me?
Did you not read beyond my first sentence?
Who are we as programmers to decide what works best for our users? Were the clerks at the bank not capable of deciding for themselves whether their pen-and-paper workflows worked better for them than the computer programs we made to replace them? The typographers of yore were almost certainly more comfortable and faster using a Linotype machine than this newfangled desktop publishing software that we invented. I simply cannot wrap my head around people in our profession who kick and scream because the march of progress once in a while makes their lives a tiny bit uncomfortable for a short while.
> Who are we as programmers to decide what works best for our users?
Where did I say that? You seem confused.
And your argument regarding publishing and banking software is a strawman intended to shift the focus away from the real argument - that of choice. Forcing a change on my workflow can have very real effects on my ability to generate income. Why should anyone be ok with that?
> Who are we as programmers to decide what works best for our users?
Where did I say that? You seem confused.
I am saying that we as programmers force people to change their habits all the time. We do it in the name of efficiency and progress. We eliminate workflows, we make entire jobs redundant. We of all people should be able to recognise that even though change is uncomfortable, it is inevitable, and mostly for the better.
> And your argument regarding publishing and banking software is a strawman intended to shift the focus away from the real argument - that of choice
Please. Even if we pretend that you don't still have the option to use a third party keyboard, or buy one of the Macs that still have the f-keys, what about the people who would prefer the new touch bar to the f-keys? What about their choice?
> Forcing a change on my workflow can have very real effects on my ability to generate income
I'm sorry, but that's ridiculous. You are not going to feel a very real effect on your ability to generate an income simply by being forced to learn a different set of shortcut keys to step through a debugger.
Well, in this case, it's not deprecated. It's deprecated by one computer manufacturer. There are more than enough other computer companies still willing to sell you a keyboard layout like the one you've been used to for the last 30+ years.
The problem is, say you're an iOS developer, you get no choice; you HAVE to run a Mac and be at Apple's mercy.
Firstly, it's not true that you don't get a choice. Apple still makes laptops with f-keys. And you can always plug in a 3rd party keyboard.
Secondly, and more importantly, of all the options Apple doesn't give you (and there are an infinite number of them), this one is so minor. Why, other than this being how you are used to it, are there important reasons for using specifically the f-keys to step through a debugger? What is wrong with any of the other keys?
I agree, of course, that change merely for the sake of change is not a good idea, but surely, surely we can all recognise that Apple did not make this change on a whim, simply to try something different?
You can use Visual Studio on Windows to write iOS apps with Xamarin or Cordova, using a network-attached Mac solely as a build server, without ever having to use it directly (except for updating stuff, via VNC).
It kind of is... this reminds me very much of moving from a BlackBerry to an iPhone. Sure, the features of the iPhone were great, but for my (at the time) primary purpose of using the device for sending emails, it was a MASSIVE step backwards. I could touch type nearly as fast on a BlackBerry as on a regular keyboard. Moving to a touchscreen meant I had to look at the screen while typing. It slowed me down tremendously, and IMO was a massive step backwards in usability.
It's an interesting comparison, but IMO not that apt. Going from a physical keyboard to a touch screen, something is definitely lost (even though much is also gained). The removal of the function keys will at worst force people to memorise different hotkeys (any reason why the number row could not serve the same purpose exactly as well?), and at best it will make providers of IDEs and other productivity software revisit old assumptions and improve the usability of their software.
They aren't just function keys. Losing the tactile feedback for volume and screen brightness, while not THE END OF THE WORLD, will very quickly become an everyday annoyance for me. In order to... have a contextual touchscreen that forces me to take my eyes off the monitor to use?
Sorry, I'm just not seeing the draw. I'm also struggling to buy into the "not everyone is a touch typist" excuse. Anyone under the age of about 40 has had a typing class. Anyone under the age of about 25 (who is using a computer for their job/attending college) knew they were going to spend the rest of their life using a computer and probably paid attention.
>They aren't just function keys. Losing the tactile feedback for volume and screen brightness, while not THE END OF THE WORLD, will very quickly become an everyday annoyance for me. In order to... have a contextual touchscreen that forces me to take my eyes off the monitor to use?
Most people already take their eyes off the monitor to use the function keys. Given this, the function-row strip will finally be more usable and more obvious for lots of other uses besides volume and brightness (things that people adjust at best a dozen times a day).
Most laptop users are neither programmers nor touch typists who use the function row 2000 times a day. Nor does being a "pro" user mean you are either of them. A graphic designer might not be a touch typist or care for the F-row, but he is a professional. Same for a doctor, an architect, a musician, a videographer, a CEO, an accountant, etc., etc.
"Sorry, but I'm just not interested in this motorized wagon you've invented. It's noisy and ugly. Could you please just go and invent me a faster horse instead?"
So you're comparing a touch bar, which has already been done before and met with the exact same pushback, to a radical new form of transportation? Fanboy alert.
So, a product that went from non-existent to being the #1 sold item in its niche, the #2 in overall watch sales, and outsold competitor smartwatches 10 to 1, becoming a multi-billion-dollar thing?
And all that in its first 2-3 years (it took longer for the iPod to become ubiquitous from its 2001 introduction), while not, of course, being expected to become the next iPhone anyway...
No, I am making the observation that most users are conservative and resist change, even for the better. And that most people can't recognize progress even if it hits them in the face.
Do modifier keys (control, alt, command, fn, shift) fall under your definition of modal UI? If so, how do you deal with the fact that your keyboard doesn't (I'm guessing) have dedicated keys for cut, copy, and paste? I think its safe to assume that everybody uses those significantly more often than they step through a program with a debugger.
If you're using an MBP at the moment, then you're already pressing the fn key to use the function keys for anything other than the media stuff they've been designed around for years. Your workflow is exactly the same - you're just using soft keys instead of physical ones.
Or change the preferences so the function keys work as expected and you have to use the fn key to access the media options. I probably use the function keys 100x more often than the media ones.
So the touchbar has a setting where it behaves precisely like the current MBP? Esc, row of 12 keys, which are either f1-f12 or media with a setting, and are toggled between with fn? That's strictly worse if I only ever use it like that because it's the same but with soft keys, but sure that's basically the same.
The debug step keys will show up in the touch bar? If they are not stupid enough to swap them around sometimes, then they should stay in the same position.
Use the strip bar in debugger keys mode, or function keys mode for non-updated apps?
Or use printf statements (seriously -- I never advocated for much debugger use, unless it concerns very focused stepping. A lot of people step all around and examine everything and anything for ages with no clear idea of what the bug might be).
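The focused printf style being advocated is trivial to illustrate (a bash sketch, not anyone's actual workflow):

```shell
# Suspect the accumulation step? Print just that value to stderr,
# instead of stepping through every line in a debugger.
total=0
for n in 3 5 7; do
  total=$((total + n))
  echo "debug: after adding $n, total=$total" >&2
done
echo "final total: $total"
```

One targeted trace line at the suspect spot often localizes a bug faster than wandering around in a stepping debugger.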
More than a few developers have spoken up on the issue here on HN. The escape key is commonly used, as are many other function keys. Other vendors have rolled out gadgets similar to Apple's, and they've been rejected by the development community at large - which Apple would have known, because they don't implement things like that willy-nilly.
Since we're talking about what is basically a rebrand of Xamarin Studio, the current release of Xamarin Studio for Mac uses the function keys for stepping in the debugger
Well, now they can use the touch bar to show those functions depending on context. Isn't that better overall for usability rather than having to remember keybinds for every action?
I usually don't have to _remember_ keybindings; I use them out of muscle memory, feeling the actual keys with my fingers and using the larger gaps between the function-key groups on a desktop keyboard to find the right one quickly, without averting my eyes from the screen. No touch bar can replace that kind of efficiency.
If I've got to constantly look between the screen and the keyboard to make sure my finger hasn't wandered off "step into" and onto "continue", then that feels much worse for usability.
It seems like it would be better in the short term (don't have to learn the bindings), but worse in the long term if it's something you do often (no way to find the keys' edges without looking every time you press the key, with the occasional glance down again to correct for drift).
Whether that's "better overall" or not depends on individual usage patterns. Personally, I don't really stray from the alphanumeric part of the keyboard often, and I can rebind esc, so the Touchbar's mostly a moot point for me. My strongest reason to dislike it is that it adds yet another set of potential points of failure for the machine without providing (for my purposes) much benefit.
Apple tinkering with the function keys has allowed the development community to congratulate themselves on their IDE mastery, it seems.
It also seems that people claim to confidently touch type debugger instructions, despite the key you want being literally squeezed between two keys that will ruin your debugging session and make you lose the next 10 minutes.
I think the developer community at large is more prone to overreaction.
Note that I do think people are going to be impacted. I think the Touch Bar is a downgrade for people who need to use Windows, either in a VM or via Boot Camp. Some IDEs that are especially F-key happy, like Eclipse, will be slightly worse off.
I still type on an old MBP with physical F-keys, and they are actually awful for touch typing. They are smaller, evenly spaced (not grouped) and not aligned with the lower key rows. If you are serious about touch typing potentially workflow-ruining key combinations like the debugger ones, you must be using an external keyboard already.
In Terminal, I have mapped common commands that I use all the time into macros assigned to function keys. For my Ruby workflows, I have various rake commands a single key press away. For when SSHed into Linux servers I have stuff like "sudo apt-get update && sudo apt-get upgrade" ready in a split instant. I tell you, it really is a party trick at work.
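A similar effect is available in any readline-based shell via ~/.inputrc (a sketch; `\e[15~` is the sequence xterm-style terminals send for F5, and it can differ on other terminals):

```
# ~/.inputrc: make F5 type out a canned command and press Enter
"\e[15~": "sudo apt-get update && sudo apt-get upgrade\n"
```

Unlike Terminal.app's per-profile macros, this binding follows you into bash on any machine where you install your dotfiles.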
I don't know if it's just the usual whining of people complaining about new things.
From the exterior (I'm too young, and the only Mac I've ever used was the Macintosh Plus - bragging, with 4 MB of RAM - when I was a child), it really seems like Apple has dropped the ball for professionals, both developers and graphics people.
There are now equally well designed PCs (laptops and desktops) from other manufacturers, Windows has a lot of support, if you want to use Linux, the kernel now supports a wide range of hardware.
In the meantime, Apple doesn't have a good desktop offering, and their laptops seem gimmicky to me while not offering a performance edge over their competitors.
A few months ago I thought about buying a (my first) Macbook and waited for their announcement, now I'm looking the other way.
And around me, I'm the guy people go to ask when they have a computer related purchase to make.
I don't know where you've lived for the past 20 years, but these are maybe the most classic arguments against the Mac in business.
Macs have never been "for pros" if you go that way. So I guess it confirms that it's not the company that shifted its focus; maybe it's rather that more potential customers expected Apple to go more "pro".
Yes, Apple has never targeted businesses, but they did target individual professionals at some point. Video and sound editors used to work on Macs and are now moving to Windows. It's a classic argument, but is it wrong?
Since popular HN submissions will often hug the site to death, a nice feature would be to automatically check the top 3 caches (archive.org, Coral, Google) immediately on submission, before the page goes live on HN. If the cache doesn't 404, the content could be quickly parsed to check it matches the submission and, if so, automatically include the cache link at the top of the submission page.
This would save people manually posting these all the time and would, in many cases, prevent the situation where it becomes impossible to retrieve a cache because nobody thought to access one before the Slashdot effect occurred, or, as seems to have happened in this case, the article was pulled.
It would also be nice if, 24-48 hours after submission, the only cache link remaining is archive.org (if they have the page), so the content is retained permanently as-submitted. It's rare, but sometimes a page will be updated so the comments no longer make sense.
It would also be nice to include a link history in the same area (have requested this before), in case the original submission is changed by the mod. Usually when this happens the notice is the top comment, but sometimes it isn't and the discussion can be quite confusing as a result.
Or, they could just have a live link AND a cache link and let people click what they want. A story with a cache link is likely to be upvoted more, encouraging people to do it.
I wasn't suggesting only displaying the cache link, but rather providing a list of alternate links (if available - not all sites allow archival, or the cache might be stale) so they are always there at the top of the page.
It's not exactly clear what is best netiquette regarding linking because larger sites that rely on advertising and can handle the traffic will welcome more clicks, but smaller sites would rather be cached. Better for HN not to encourage or discourage either, but give both options and let the readers decide.
Ah OK, it was probably under embargo and got posted early by mistake. I've been to Connect(); before in a time zone ahead of the US and nothing gets announced in the morning until the US wakes up and the embargo is lifted.
So for now it is a rebranded and polished Xamarin Studio. Hopefully they have improved its usability, as in the past it was pretty lacking compared to VS 2013/2015.
As I understand it, Visual Studio for Mac is simply a re-branded Xamarin Studio and will continue to be. It includes the improvements they had planned for the next release but I doubt they will do a rewrite.
I actually prefer Xamarin Studio over Visual Studio (on Windows) in some respects. For example, the Xamarin.Forms XAML previewer is much better. Looking forward to a full designer.
Visual Studio for Mac is a re-brand of Xamarin Studio. However, perhaps more importantly than the software itself, opening the Visual Studio brand to the Mac environment signals Microsoft's commitment, yet again, to meeting developers in their chosen environment. Xamarin Studio is also based on MonoDevelop, so maybe this could accelerate a VS Linux port, if they're so inclined.
I just started looking into Xamarin Forms and couldn't figure if it was just blindly typing xaml or there was a way to preview it. Thanks for the posts.
Yeah, for now you have to type the XAML but a designer is coming. There is a preview of a previewer but I couldn't get the one in VS to do anything. The XS previewer did work for me though. Make sure Android Studio / ADK is updated if only iOS is working.
You should, Microsoft will probably release an MS Linux distro soon. (Besides the Debian or whatever they now have for Azure users, I mean.)
MS is going all Oracle-y on us, withdrawing from the consumer market as fast as they can, and heading for the green pastures of the corporate market, where fat happy companies are there to be milked. They are sick and tired of fighting for consumer money one penny at a time.
(XBox is probably not long for this world either.)
.NET Core is for running apps on Linux. For a large chunk of the server market, that is plainly a requirement these days.
But most people writing the apps that then run on Linux develop them on Macs (and the point of WSL is to give them a reason to consider Windows for this).
+1 for Project Rider. The EAP is pretty good at this point, it supports debugging for .NET Core and decompiles dependencies when using Go To Definition.
It still has a few bugs and hangs up on me once or twice a day. I think they should have those ironed out soon though.
To me it looks like they are giving people running Linux the minimum tools to run that stuff, but at the same time not giving enough to encourage devs to stay on Linux (WSL goes in the opposite direction; its purpose is to help devs move to Windows). But this is all IMHO, take it with a grain of salt.
By the way, since it's based off MonoDevelop, you can just use it on Linux.
I've been using Visual Studio Code on Linux to do some TypeScript development. It's actually pretty nice, and works really well in a general JavaScript environment (npm, etc).
We have been waiting for years for a simple Xamarin Studio for Linux, even one supporting only Ubuntu and letting the other distros manage the packages themselves... The reality is that it seems they don't want to bring Xamarin to Linux at all; sadly, Miguel has been very clear about it over the years.
I'm glad to see more Microsoft dev tools on other platforms, but don't lose sight of why this is happening. Microsoft is shifting their business to the cloud. They make their money off Azure and other services. In other words, they are making their money mainly off of developers now, and it's in their best interest to get on the good side of devs, which is why they suddenly have a vested interest in open-sourcing tools and helping Mac/Linux. Given the love and lavish praise I see heaped on Microsoft in every thread whenever they do something, it's clearly working. I'm not saying don't praise them when they do something good, but don't be deceived into thinking they are doing it out of good faith.
> don't lose sight of why this is happening. Microsoft is shifting their business to the cloud.
Why is this a bad thing? Microsoft is a company, they exist to make money. A huge new market for them is 'Cloud' and they're doing everything they can to make that as appealing as possible.
If you want to be the tallest tree in the forest, you can take in the most water and sunlight to be taller than everyone else, or you can chop down all the other trees so you're the only one standing.
Back when they made money from selling operating systems, they were definitely doing some tree chopping with some of their practices.
No business operates out of "good faith". For a long time, people treated Google like it was a nonprofit with all of the talk about making the world a better place, rather than recognizing it as, first and foremost, an ad company. As you point out, making people like your company is just good business, and successful businesses do that.
What's the long-term business goal for Azure lock-in? If .Net runs as well on Linux as it does on Windows, then there really is no reason to use Azure over any other cloud provider like AWS where generic CentOS or Ubuntu boxes are no different than their Azure counterparts.
Back when .NET was Windows-only, they gave it away because the goal was that developers would pay a lot of money for MSDN, GUI apps on Windows, SQL Server, Office and SharePoint integration, etc. But .NET Core is mostly server-side, so I'm having trouble figuring why they'd bother giving away VS to Mac users without being forced to run on Azure in production.
> so I'm having trouble figuring why they'd bother giving away VS to Mac users without being forced to run on Azure in production.
It's the trickle up theory of modern development. Get folks using your languages and tools, and if you make it easy to integrate with your cloud services in those tools, that's where the revenue comes in.
It would probably be an interesting number: how much revenue is generated by various cloud providers from folks who just forgot to shut down their VMs.
Because AWS treats the cloud as infrastructure. Microsoft treats the cloud as infrastructure plus software and services, and they have superior software. Using their tools means exposure to their software, which means that a lot of people will choose to pay for their other software.
"""For the functional programmers among you, it includes excellent F# support, powered by the same F# compiler used in Visual Studio."""
I've heard that F# is great from multiple people I trust a lot (and a quick cross check showed it does indeed look very cool) so I might give it a try once this is released.
I do some C# development (Unity Engine stuff) on my Powerbook so this is also good news (MonoDevelop is fine but I'll obviously test VS for Mac)
I started converting some C# code to F# as a learning exercise about a month ago. After a few fits and starts, something clicked, and I am amazed at how much I like it. I have been following the articles on "F# for fun and profit" and highly recommend it.
OCaml folks don't seem to like F#, as its implementation differs from what many in the FP world believe is "correct", but any form of FP on .NET is a good thing, I think.
Interesting, but I can also see some risks with their approach. Now they have three different Visual Studios with completely different technology stacks.
The shell still has a lot of C++ in it, yes (but also a lot of C#).
But the shell is a relatively small part of the entire IDE - the bulk of it is the extensions that actually provide support for various languages and technologies. Those also have some C++, either as legacy code, or because the team just preferred it, but by now C# is definitely the majority of it, and it keeps trending in that direction.
A lot of it can be very non-obvious and defying common sense. For example, if you take VS 2013 (pre-Roslyn), the C++ project system was written entirely in C#, while the C# project system was written mostly in C++.
Hmm, I have been googling for the last 15 minutes to find the difference between Visual Studio and Visual Studio Code and can't seem to find a concrete answer. I thought yours was actually a hint at what it is aimed for, but when you go to:
Visual Studio and Visual Studio for Mac are IDEs written in C# and they support C#, C++, VB.Net and many other languages.
Visual Studio Code is a text editor built in TypeScript and based on Electron. It supports a broad range of languages, but it's an advanced text editor, not a full-fledged IDE.
Visual Studio Code is more like a fancy text editor than a full IDE. It's more like Sublime Text or (technically) Atom than an IDE like Visual Studio for Windows or Xcode.
Do YOU know the difference? Would it hurt to be more concrete? I'm going to venture a guess based on my assumptions, which are probably wrong, but then we can at least start a discussion:
-VS Code doesn't support solution-wide refactoring like renaming classes or moving a method from one class to another
-VS Code doesn't support runtime debugging (breakpoints)
-VS Code doesn't have a visual editor
-VS Code doesn't have memory or performance profiling tools
-VS Code doesn't have source control integration
-VS Code doesn't have an integrated build tool (MSBuild)
> VS Code doesn't support solution-wide refactoring like renaming classes or moving a method from one class to another
F2 "Rename Symbol" (works on Go with gorename). Can't imagine using Roslyn doesn't (or won't soon enough) allow the same. From the readme:
Great C# editing support, including Syntax Highlighting, IntelliSense, Go to Definition, Find All References, etc.
> VS Code doesn't support runtime debugging (breakpoints)
It does. CMD+Shift+D goes to the Debug sidebar. CMD+Shift+P Debug shows a bunch of commands. From the readme:
Debugging support for .NET Core (CoreCLR). NOTE: Mono and Desktop CLR debugging is not supported.
> VS Code doesn't have source control integration
CMD+Shift+P git whatevs is used daily here as well as Ctrl+Shift+G for the git sidebar that can show (editable) diffs. Both the gutter and scroll bar are annotated with git info.
> VS Code doesn't have an integrated build tool (MSBuild)
MSBuild comes with .NET Core and is (probably) invoked with the language-agnostic CMD+Shift+B (run build task). The C# extension may have more.
> VS Code doesn't have NPM or Nuget integration
There are both NPM and NuGet extensions available.
Now of course if what you want is all of this nicely packaged and wrapped in a GUI, well, obviously VS Code is not an IDE (as VS is) but that's precisely its value proposition.
The "I" stands for "Integrated" in IDE. Project generation, code inspection, source control, database access, refactoring, debugging, test running, etc. are integrated.
Visual Studio Code generates projects via a third-party tool (dotnet new). Refactoring and code completion are done via a third-party tool (Roslyn?). Etc. It's an editor with plugins, unlike Visual Studio or JetBrains Rider.
It would not hurt, but it would be an arduous and unnecessary task. Most people know the difference, and if you do not, you can read the Wikipedia article on IDE. There is no need to have a discussion on the differences, it is not remotely interesting or fruitful.
I guess what annoyed me was that after thebeardedone said he'd googled for 15 minutes trying to figure out the difference between VS Code and VS, 4 answers to his post gave absolutely no clarification as to what the differences were, including yours. If you look up the definition of an IDE on Wikipedia, VS Code falls into that category.
Visual Studio Code is less than a full-featured IDE, and more of a very advanced code editor with tool integration... The UI is much more minimalist in VS Code than in full VS, Eclipse, etc.
It's not that it doesn't do the job, it's just a different workflow.
As it stands, VS Code is probably the best tool for JS/Node projects today, IMHO. And from what I've seen, one of the better options, via plugins, for Go and Rust.
It's also available for Windows, Mac and Linux... Much faster/lighter than the others, including Atom.
Lot of overlap, but VS Code has a much bigger ecosystem/community. Xamarin/Visual Studio has advanced debugging features like thread inspection, "immediate" console for executing commands while the program execution is paused, profiler, App Store submission, and stuff like T4 templates, etc.
But if you're writing C# in any of them, you're using the Roslyn compiler and its attendant services, so you get the same IntelliSense, you can share the same project files, etc.
Most of the UI was rewritten for VS 2010 with WPF (XAML, DirectX acceleration), but the lower/deeper parts are written in C++, and this makes porting "problematic" (MFC, Win32 DLLs like kernel32.dll, to POSIX stuff).
MonoDevelop ~ Xamarin Studio is written in C# (some plugins in F#), and this makes porting easier. That is the reason there was a MonoDevelop version on Windows.
This is nice of course, but without C++ support it has very little appeal to me.
I don't want to learn C# to write iOS apps. I might learn it just for fun, but I will continue writing the iOS apps with Swift/Objective-C and C++.
C++ support is the weak spot of Xcode and so far I haven't found a suitable IDE for C++, except maybe Qt Creator and several IntelliJ-based IDEs, which are ok but not on par with Visual Studio on Windows.
I keep a windows machine around mainly for writing C++ code (and games!).
Because the officially supported language for iOS development is Swift (and ObjC) and Xcode is quite good for iOS development.
I also like Swift a lot. It's a modern language with lots of powerful features and also very practical.
My current project involves a lot of context switching between Swift, C++ and Objective-C++, and I miss a lot of Swift's features in C++. For some obscure reason I also enjoy Objective-C++ quite a bit; it's a weird mixture of insanity, but it feels good ;).
I don't know a lot about C#, I think it must be a good language, albeit older with more historical baggage, but even so I wouldn't use it instead of Swift for iOS development.
I mean, why ?
For Android development, maybe; I'm not too fond of Java. But on iOS/Mac I will stick with Swift, thank you.
This is great news! I was a C# developer for many years and absolutely loved it. But then I gave up the Microsoft ecosystem 7 years back, just because it felt like lock-in and also a bit backward compared to non-MS tech.
But in the past one year, Microsoft has got me in again:
a. Moved to TypeScript from JavaScript (including my hobby projects)
b. Moved to VSCode from Sublime
c. C# is a great language and I just hate Java. Hope this and more steps make it easy to use C# and deploy in non-MS environments.
C# is a lot different today than it was 7 years ago. You might be in for a shock. The nice thing is you can develop it today as you did 7 years ago, but with heavy use of lambdas and var, it reads almost like another language.
7 years ago was 2009. var, lambdas, and LINQ were already in pretty heavy use by then. The biggest change since then is the dynamic language runtime, but I don't really see that used much in C#.
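For anyone who hasn't touched C# in a while, here's a small sketch of my own (example data and names are mine, not from the thread) contrasting the old explicit-loop style with the lambda/var-heavy idiom the parent describes:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        int[] numbers = { 5, 4, 8, 1 };

        // Old-school C# 1.x/2.0 style: explicit types and loops
        List<int> evensOld = new List<int>();
        foreach (int n in numbers)
        {
            if (n % 2 == 0) evensOld.Add(n);
        }

        // Modern style: var, a lambda, and LINQ do the same in one line
        var evensNew = numbers.Where(n => n % 2 == 0).ToList();

        Console.WriteLine(string.Join(",", evensNew)); // prints "4,8"
    }
}
```

Both versions produce the same result; it's the second style, applied everywhere, that makes modern C# read "almost like another language" to someone returning from 2009.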
>At its heart, Visual Studio for Mac is a macOS counterpart of the Windows version of Visual Studio.
Does it have a monolithic install and update process that's essentially a slow, bloated black box? That's my main turn-off with Visual Studio on Windows, and even more so with Windows itself.
As soon as Microsoft figures out efficient installation and updates via CLI without the need to reboot, they'll dominate the developer space (and perhaps server market, where reboots are even more problematic).
At a presentation at NDC or something Scott Hanselman did a bit where he pretended that he'd forgotten to install VS. Then he proceeded to install it live in the presentation, in less than a minute I think.
The statement "macOS counterpart of the Windows version of Visual Studio" is essentially false as far as I can see; the core of VS for Mac is Xamarin Studio, which is not Visual Studio.
Visual Studio for Windows has two different "cores": the massive old C++/COM-based application used since forever, and newer subsystems such as the language subsystem for C# and VB (Roslyn), which is .NET-based.
Xamarin Studio could be using Roslyn, but the app isn't based on the same platform as VS itself. Since it is at its core a newer and much smaller application, it shouldn't have any of the setup issues that VS has (just like VS Code doesn't).
It is based on Xamarin Studio. I have worked with XS on OS X for the last 3 years. Unfortunately, Xamarin Studio has a lot of issues with performance and refactoring, and it crashes from time to time.
I think it will be better to wait for the release of Project Rider. At least the EAP is already available.
It is my belief that Windows Forms will outlive WPF and/or XAML. Currently WPF is still the "modern" way of doing Windows applications, but at some point Microsoft will come up with some new scheme, and all the WPF developers will jump onto that.
The developers who have still not abandoned WinForms will not jump ship when the next thing hits either. To keep the huge group of WinForms developers happy, Microsoft will continue to support and develop WinForms. For many, WinForms is still the fastest way to develop simple applications.
So if you want to develop for the Windows (desktop) platform, WinForms will be a clear winner for many, many years into the future.
WinForms is still the fastest way to develop complex applications as well. From soup to nuts, it's probably the pinnacle of front-end development.
Trends started going from WinForms to web around 2000 as pushback against Microsoft so here we are today. I still think from a business standpoint, it was a very expensive thing to do.
Except that WinForms doesn't work in UWP applications other than via Project Centennial, and is officially in maintenance-only support, as communicated at BUILD 2014.
Of course. My point is that so many business applications are Windows Forms, and will continue to be so for many years to come.
Removing WinForms would disenfranchise a large group of developers who, for one reason or another (it could just be stubbornness), won't switch. Microsoft will sooner kill off WPF than WinForms, so as not to lose those developers. At least that's my belief.
"Windows Forms is continuing to be supported, but in maintenance mode. They will fix bugs as they are discovered, but new functionality is off the table. Oh, they stress that it isn’t called “WinForms”."
That's from memory, as I don't have time to search for the video on Channel 9.
For native UIs on Windows, there are few better choices. Most line of business applications tend to be spreadsheets on steroids with lots of custom business-related functions, and for that kind of thing web-based UIs are far more difficult to deal with (think displaying thousands of rows, potentially 100+ columns, quick grouping and pivot-table-esque functionality, etc). You can do it with something like Electron and some open source JS libraries, but as someone who has gone down that rabbit hole it is a nightmare compared with using an out-of-the-box component from Infragistics, DevExpress, or Telerik that is much faster than a HTML/CSS/JS interface.
Absolutely not tied to the Store, you can distribute your appx packages any way you want and the users can install them with a double click. The only requirement is that the packages must be signed, so you have to buy a codesigning certificate or ask your users to install your self-signed certificate.
The app also doesn't have to be a UWP app: you can wrap bog-standard desktop applications (even Electron apps), HTML/JS apps (with direct access to UWP APIs), or even nothing: your manifest can just point to your live site, which can check whether it's running in this context and then use UWP APIs; those are called "hosted web apps".
XAML is just a way of writing object graphs in XML. It is not tied to WPF or anything (heck, we use it to represent user data and style information within GraphML).
And yes, XAML is used by more than WPF at this point (Silverlight, UWP, and possibly more).
I'm curious if you've tried Visual Studio Code for Python on Mac?
I've heard good things about it, and just this past weekend tried it out, but couldn't even get a simple Hello World working on it for Python.
I installed the common Python extension for it (Don Jayamanne's), but it seems like it couldn't link up with the Python toolset. I.e., I could edit scripts with Python highlighting and autocomplete, but couldn't get them to execute. I gave up after an hour of trying and went back to PyCharm.
I'd be interested in giving it another go, but if I need to bang my head just for a Hello World, I worry how bad it might get later on.
I first tried VS Code for Python programming on the Mac last weekend, and I've made it my main Python editor/IDE by now. I found it extremely straightforward to set up; especially the debugger is usable in a very intuitive way.
Have you set the configuration setting for the Python executable?
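If not: as far as I recall, Don Jayamanne's extension reads the interpreter path from workspace settings. A minimal `.vscode/settings.json` might look like this (the path below is just an example; point it at your own interpreter):

```json
{
    "python.pythonPath": "/usr/local/bin/python3"
}
```

With that set, the debugger and run commands should pick up the right Python instead of whatever happens to be first on PATH.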
Did hell just freeze over? That is one thing I never expected.
.NET open source and officially supported on *nix, some version of SQL Server on Linux, the cross-platform Visual Studio Code editor, and now Visual Studio for macOS...
Indeed. I've been using XS on Mac and VS on PC to build a cross-platform product - with shared code for 'core' calculations + view model stuff and dedicated GUI pieces for each platform - for a couple of years now, and I've been sometimes very frustrated by the restrictions of XS compared to VS; it made the Mac side of things much slower to develop. This, and other recent moves by Microsoft with respect to Xamarin, have made me happy (and, yes, somewhat relieved), that I 'bet the company' on this kind of architecture.
There doesn't appear to be much demand for our product on Linux (it's targeted towards quite a conservative, non-tech market), but Linux support would also be very welcome.
So we were told when Xamarin was bought by Microsoft and Xamarin Studio made free. I just wanted to be able to make Android apps on Xamarin from my Linux laptop. Miguel de Icaza said on a reddit AMA they were already using MonoDevelop with Xamarin Android SDK internally, and would be releasing it.
To date I cannot find anything Android-related in MonoDevelop (or its available add-ins).
Yes, that was also the analysis for the recent Linux layer in Windows. At the same time Powershell was ported which allows Windows admins to continue to use their tools.
This is mostly focused on Xamarin (IMHO they're phasing out Xamarin Studio), and Visual Studio Code already offers a good .NET Core developer experience
I wonder how this will affect Rider (the new JetBrains C# IDE based on IntelliJ, currently in EAP), and whether they will continue to invest in ReSharper for multiple platforms or focus their effort on bringing the ReSharper functionality into Rider.
...I also can't help but think this sort of indicates that the C# tooling in visual studio code is being reconsidered; which seems reasonable, I was always disappointed by it.
That would be great news if the Microsoft C++ compiler were also part of it. Microsoft has lately added some extensions to C++ that are amazing in terms of performance, like coroutine support, and that are not yet part of GCC or Clang.
This is very good in general. Now if they only could make a version for Linux and get rid of this whole "application barrier" thinking. Microsoft would probably be the only company right now who could fight the walled garden, developer lock-in approach that continues to destroy personal computing.
But if the past holds any indication for the future, then I'm not holding my breath for it. They'll probably just use the Mac version to introduce slight incompatibilities or make the Mac versions of existing products kind of slower and buggier than the Windows versions. :(
They said it's based on Roslyn, so it should support VB.NET. In the current release of Xamarin Studio for Mac, VB is supported in GTK and console applications, while Xamarin targets only support C# and F#.
Given that "Even core features such as C# editing, Xamarin.iOS, Xamarin.Android and ASP.NET Core are implemented as extensions", I assume it'd be relatively easy for them to implement, especially since VB and C# effectively compile the same.
Presumably, C#, Android, and iOS are their top priorities to support, things like VB are less so, but could be added later.
I was really excited until I realized that. A C# IDE on macOS is cool, but damn, I would kill for a better C++ experience on Apple. Xcode is OK, but its code completion is buggy and there is no refactoring support... I've heard CLion is cool, but I'm kind of poor and don't have the spare cash for it.
Eclipse CDT has definitely been getting better; I'm quite happy with it now for C++ projects. Code completion and navigation work well, and you can just import a makefile-based project.
I really like CLion, you should check it out, there is a 30-day trial. Qt Creator is also a decent IDE for C++, which is free (the Qt Open Source version).
Pricing for developer tools tends to look more reasonable when you amortize it over the year and put it in Starbucks terms. Same for hardware upgrades. There's no real excuse for not investing in good tools.
Indeed. Even the full JetBrains package is good value even for a hobbyist programmer. People spend thousands on fishing rods and golf clubs for their hobbies yet when it comes to software they scoff at anything over £30.
Having said that, I think a community edition of CLion will be needed in the not-too-distant future, as the competition in the C++ IDE space is improving, with VS now free and Qt Creator as an option.
This is a huge step forward, as it means I don't have to buy a new laptop just to work with C# (I've been waiting for Black Friday just for this).
I wonder if this means SQL Server Express is coming as well, but that doesn't matter as much to me, since I think Entity Framework takes care of working with a different database engine.
Let's put this in context. At the moment the main thing that's happened is Xamarin Studio got rebranded as Visual Studio for Mac. If you look up the features of Xamarin Studio you'll have your answer, at least based on the current features.
Wow, I would've loved to see the look on people's faces if, in say, 2002, they could've glimpsed this headline from the future. Nobody would've believed it. How things change!
As a lover of VB.Net I am perennially disappointed with how little they seem to care about porting it solidly to other platforms or in general for that matter.
Ah, OK. So now there are three (four?) completely different products branded as "Visual Studio"? 1. The original, real Visual Studio for Windows; 2. Visual Studio Code (a fork of Atom?); 3. Visual Studio for Mac; (4. Visual Studio Express?)
VS Code uses Electron, but is not a fork of Atom... It's actually very fast and fairly nice. The UI started out similar to Atom's, though I wish I could take back my request for tab support and go back to the non-tabs UI.
This Mac version is a rebadge of Xamarin Studio, with some redesign and tooling improvements.
VS Express is really a stripped-down version of VS, and the Community edition is usually a better option.
I don't see why it would preclude it from cross-compiling WPF apps. WPF is just a bunch of assemblies - they only run on Windows, but if you're building your own code against them, only the metadata in them needs to be loaded by the compiler.
Of course, you'd still need the assemblies; they could be extracted from the .NET redist, but it's not easy.
Same feeling I used to get when the recorded voice on the phone line said "We're sorry". The feeling that no one was actually sorry, except the person getting the message.