It can be run while Xcode is open; it'll just reset the tree view. It basically alphabetizes everything, so there is one canonical ordering to items that you add to the project.
When you have a consistent, established framework for managing a part of your program that provides an easy to read, visual overview of that part of your program for new developers (or old developers who haven't touched the code in a while) and which can avoid thousands of lines of meticulous declarative code, you should have a better reason for avoiding it than pompously declaring that it's tricky to maintain computer generated XML in your version control system.
For version control and XIBs... you should keep them as decomposed as possible (XIB files support soft links to sub-controllers allowing parents and children to be in separate files) and once fully decomposed, you shouldn't need to merge XIBs -- just treat them as monolithic. If you find yourself needing to merge then you've most likely failed to decompose your XIB enough or you've got too many people performing overlapping roles.
If you need programmatic view adjustment, you can implement -layoutSubviews (iOS) or -resizeWithOldSuperviewSize: (Mac) to handle programmatic reflow or other code-based tweaks where absolutely required.
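For the iOS case, a minimal sketch of what that looks like (BadgeView and countLabel are hypothetical names, not from any particular codebase):

```objectivec
// Hypothetical view: most of the layout lives in the XIB, but one
// subview needs code-based reflow whenever the bounds change.
@interface BadgeView : UIView
@property (nonatomic, strong) UILabel *countLabel; // wired up in the XIB
@end

@implementation BadgeView
- (void)layoutSubviews {
    [super layoutSubviews];
    // Pin the label to the top-right corner, whatever the current bounds.
    CGSize size = [self.countLabel sizeThatFits:self.bounds.size];
    self.countLabel.frame = CGRectMake(CGRectGetMaxX(self.bounds) - size.width,
                                       0, size.width, size.height);
}
@end
```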
Additionally, since a XIB file is a cached serialization format, every view instantiated from the XIB after the first one loads faster than constructing the view in code.
I know nothing ultra-secret is given away, but if I were looking for a contractor, I'd be concerned that such a person might perform a post-mortem on my company after leaving as well.
Secrecy is stupid for these things.
So what are you specifically objecting to? My saying that I doubt he cleared it with Google? I'm entitled to my doubts, thank you :-)
I think the only things I didn't hear before were some of the specific toolnames.
edit: http://google-engtools.blogspot.com/ here's their engineering tools blog.
http://www.youtube.com/watch?v=2qv3fcXW1mg and a talk
http://www.youtube.com/watch?v=b52aXZ2yi08 and another talk
Really cool stuff.
Ah, yes, and Java and C++ are all flowers and unicorns flying around.
After 10 years as a C++ and Java developer, and 3 years of Objective-C, Objective-C is by far my preferred language (although it's an absolute must to use with an auto-completing IDE due to its wordiness).
Plus isn't syntax mocking the same as editor mocking? Expected and pointless.
e.g. the primitive vs. object distinction, the lack of closures (GCD is a poor replacement) and the way they allow a rethink of control flow, the neat things you can do because the stack is an object (restart exceptions with values! coroutines!)
I'm fine with the compromise, I just wish it was different.
I had hopes that MacRuby would turn into a supported systems programming language, but it's not to be.
I suspect the author's experience might be because he worked in an office where teams are located in closer physical proximity. It can be socially difficult to use an iPhone when the local Android team sits just down the hall.
Do Googlers typically use experimental Android builds?
Googling for "android kernel panic" gives me pages about overclocked custom ROMs (and a Mac OS X-inspired game), and "PagerService has stopped" gives me exactly zero hits.
Are you seriously trying to claim that there is a greater risk with a Nexus 4, for instance, of a pager service stopping or a kernel panic? Your post seems utterly ridiculous.
I never had this issue on my iPhone(s).
Regardless of which device you like to use, ignoring its faults doesn't help them get fixed.
Though I don't think that was what they were saying. Their dismissive tone about Android (incl. the battery) makes me suspect they really do harbor animus towards the product. That might seem odd for a Googler, but it could easily be explained by someone having been rejected from that team, disliking someone on that team, etc.
Seriously? It’s impossible for a Google employee to not like one of their products?
That would be dishonest, no need to ascribe it to malice or retaliation.
This should be true at every company. How can you improve your product if you think everything you do is just perfect and there is nothing to be improved?
You can get a Windows machine but you need to jump through some hoops and paperwork to get it.
And no, the Nexus 4 wasn't an option for the Xmas present this year. Not totally surprising given the obvious trouble LG's having keeping up with demand.
Completely unrelated: I'm in love with the carpeting in the London CSG office. I've never felt more compelled to take my shoes off in my life.
Is Perforce really that obscure?
- Defining the method signatures in a header file or an @interface section, then repeating the same method signatures just to implement them in @implementation. I just think it's an awful waste of effort.
- The idea of categories. The way I see it, there doesn't seem to be a reason not to subclass instead, since a category won't make sense anyway unless you #import the original class.
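For what it's worth, the case for categories is that they add methods to a class you didn't write and can't substitute a subclass into, e.g. strings that Foundation hands back are already plain NSString instances, never your subclass. A small sketch (the category and method names here are made up):

```objectivec
#import <Foundation/Foundation.h>

// A category bolts methods onto an existing class. Every NSString in
// the program -- including ones returned by Foundation itself --
// responds to the new method, which subclassing can't achieve.
@interface NSString (Trimming)
- (NSString *)stringByTrimmingWhitespace;
@end

@implementation NSString (Trimming)
- (NSString *)stringByTrimmingWhitespace {
    return [self stringByTrimmingCharactersInSet:
            [NSCharacterSet whitespaceAndNewlineCharacterSet]];
}
@end

// Usage: [@"  hello  " stringByTrimmingWhitespace]
```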
- @optional directive in protocols. I think of protocols as Java interfaces, and I assume that's the intended purpose for having them, so having an @optional directive seems quite pointless to me.
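As I understand it, @optional mostly exists for the delegate pattern, where a delegate implements only the callbacks it cares about and the caller probes with -respondsToSelector: first. A sketch with made-up names:

```objectivec
#import <Foundation/Foundation.h>

// Hypothetical delegate protocol: one required callback, one optional.
@protocol ImageDownloadDelegate <NSObject>
- (void)downloadDidFinish:(NSData *)data;     // required
@optional
- (void)downloadDidProgress:(float)fraction;  // delegate may omit this
@end

// The caller checks before sending the optional message:
// if ([self.delegate respondsToSelector:@selector(downloadDidProgress:)]) {
//     [self.delegate downloadDidProgress:0.5f];
// }
```

A Java interface would force every delegate to stub out every callback, which is why @optional isn't redundant there.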
- Obscure variable scoping. I find myself having to memorize too many visibility rules: ones that apply to the objects themselves (what attributes they inherited, local/block scoping, etc.) and ones that apply to the files themselves (defining instance variables within @interface, global but file-scoped variables within @implementation, the extern keyword, etc.). This is much simpler and more elegantly done in Java, where you don't need to switch between thinking of your program as a bunch of interacting objects and as a bunch of files.
- Which makes me think that Objective-C just doesn't have good OOP in the first place.
- Conditional compilation. When is this ever useful? I just can't visualize having to write this (maybe for games?), but again, I never had to do this in Java.
The method signature issue is a direct result of the C history. Far easier (at the time) to tell the compiler in advance about signatures than to force it to compile everything twice to find them (which is, roughly speaking, what Java does, except the second "compilation" happens at run time).
ObjC is a truer OO, IMO, because it focuses more on objects and message passing rather than method calls. As Categories and optional protocols show, this provides a far more flexible approach.
The conditional compilation comment confuses me and makes me think you don't really understand what's happening under the hood. When you compile Java, you get byte code that requires a VM. When you compile ObjC, you get native code. Now imagine dealing with architectures as different as PowerPC and x86. All those differences were conditionally compiled into your VM. In ObjC, you need to deal with them yourself (and all the other positives and negatives that implies).
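Two common illustrative uses in ObjC/C are picking a platform type at compile time and compiling logging out of release builds (DLog is a widespread community pattern, not an Apple API; the typedefs assume UIKit/AppKit headers are imported):

```objectivec
#import <TargetConditionals.h>

// Platform differences a Java VM would hide get resolved at compile time:
#if TARGET_OS_IPHONE
typedef UIColor PlatformColor;   // iOS builds use UIKit's color class
#else
typedef NSColor PlatformColor;   // Mac builds use AppKit's
#endif

// Debug-only logging: in release builds the calls vanish entirely,
// costing zero runtime. fmt must be a string literal so it can be
// concatenated with @"%s ".
#if DEBUG
#define DLog(fmt, ...) NSLog((@"%s " fmt), __func__, ##__VA_ARGS__)
#else
#define DLog(...) /* compiled out */
#endif
```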
ObjC is a purer form of OOP than Java is.
Not to mention Java has its own quirks in my book: no unsigned variables, no typedefs, calling a method on a null reference throws an exception instead of returning null (what's up with that?), etc. And don't get me started on how Android uses XML (poorly).
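For the curious, Objective-C goes the other way: messaging nil is defined behavior, returning nil/zero rather than throwing. For example:

```objectivec
#import <Foundation/Foundation.h>

// Messaging nil is safe and well-defined in Objective-C.
NSString *name = nil;
NSUInteger len = [name length];           // 0, no crash
NSString *upper = [name uppercaseString]; // nil, no crash

// The Java equivalent would throw a NullPointerException:
//   String name = null;
//   name.length(); // NPE
```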
Really, with languages I think it's primarily a matter of what you're used to.
I think Android's ability to use folders to support different layouts and resources beats the file-naming conventions necessary for the same scenario in iOS... i.e. myLogo~ipad.png
I can change the underlying grid size, padding and item spacing for every screen in my entire app in one line. Have fun doing that in iOS.
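Presumably this refers to something like a shared dimension resource (the resource name here is invented), which every layout references via @dimen and which resource-qualified folders override per device class:

```xml
<!-- res/values/dimens.xml (default, e.g. phones) -->
<resources>
    <dimen name="grid_spacing">8dp</dimen>
</resources>

<!-- res/values-sw600dp/dimens.xml (tablets): same name, one overriding line -->
<resources>
    <dimen name="grid_spacing">16dp</dimen>
</resources>
```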
Categories can be nice for avoiding subclassing. Subclassing can become a nightmare when one controller extends another, which extends another, which extends the Google Analytics one. There's so much hidden behavior, and you can't even easily disable the log-spamming Google Analytics controller. Not compiling in the init code gives you errors all over the log.
It's similar in spirit to "monkey patching" in Ruby and similarly dynamic languages (but won't allow you to replace existing methods, so not as dangerous).
My understanding is that you can create methods with the same signatures, but which one ends up getting called is completely undefined, making it a worse idea than it usually is.
It can be a very handy thing, though; I really like it.
Conditional compilation is a terrible, terrible thing but for the opposite reason you think; it's just too useful for too many things and almost inevitably leads to shipping code that's never/barely been tested. Java left it out to protect WORA, but the only reason you can really (sort of) live without it in performance-critical apps is due to the JIT doing the same sort of things for you invisibly.
I don't know Obj-C well at all, but no arguments on the rest :)
There are a lot of pros and cons of both. I used to spend a not insignificant amount of time trying to moderate debates between iOS engineers who favoured one approach versus the other.
Doing all your layout in code isn't inherently bad, and there are a lot of Apple-written apps that do this (conversely, some of the newer iOS built-in apps do use NIBs). The main problem that I've found with layout entirely in code is that whilst it's fine for you, the sole developer, once you bring more people onto the project you can have problems getting them up to speed on what exactly is going on where.
Of course, the solution to this is to enforce strict coding standards over how to layout the views themselves in code, which Google clearly do. And as the article points out, resolving merge conflicts in code is somewhat more enjoyable than in nibs.
That said, just as programmatic layout isn't inherently bad, neither is leveraging Interface Builder strategically. Here's a good example: iPhoto on iPad has a completely custom interface that's mainly laid out programmatically, but certain key elements are actually composited together in IB. For example, the brushes that slide up when touching up photos are brought in from NIBs, but animated and manipulated in code. Using the nib file to load in the images reduces the code without sacrificing understanding (or at least, that's Apple's argument. There's a fantastic WWDC 2012 session that covers how the iPhoto UI is put together in more detail).
The TL;DR: the only risk of programmatic layout that I see is developers going 'off piste' and laying out in a non-standard way. With the right coding standards you should be fine.
To me that implies that one essentially makes the other redundant but I prefer to see them as two powerful tools in my box that each have their uses. I've always thought the people that insist on doing everything in xibs were a little weird but going too far the other way doesn't make sense to me either.
Do you happen to know the session name or number?
- easier version control and code merging
- you don't need Interface Builder to review the layout code
- code is easier to search, e.g. for the use of certain controls (Xcode doesn't support searching XIB files)
- code can be parameterized (you can e.g. use global constants for font sizes, colors and margins)
- layouts that follow certain rules (fixed heights and margins etc.) are sometimes easier to build with code, especially when the number of visible controls is variable or some controls are only optionally displayed
- you can easily refactor aspects of the layout into reusable components (i.e. functions or classes)
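To illustrate the last few points, a sketch of parameterized layout in code with a variable number of optionally visible rows (the constant names and method are made up for illustration):

```objectivec
// Shared layout constants: change one line and every screen that uses
// them follows.
static const CGFloat kContentMargin = 16.0;
static const CGFloat kRowHeight     = 44.0;

// Inside a hypothetical view controller: stack however many rows exist,
// skipping hidden ones -- awkward to express in a static XIB.
- (void)layoutRows:(NSArray *)rows {
    CGFloat y = kContentMargin;
    CGFloat width = CGRectGetWidth(self.view.bounds) - 2 * kContentMargin;
    for (UIView *row in rows) {
        if (row.hidden) continue; // optionally displayed controls
        row.frame = CGRectMake(kContentMargin, y, width, kRowHeight);
        y += kRowHeight;
    }
}
```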
Of course there will always be cases in iOS apps where layout in code makes more sense though.
Just looking through my open projects right now, all of my view controllers have a buildUI method right under -viewDidLoad.
I haven't made many more iOS apps after that, so I can't really say that with lots of experience, but still. As long as you're mindful of how you structure your code, it's not extremely difficult to maintain.
Is the part of instance variables having to be sorted alphabetically really true? I did a quick search but only found the C++ style guide, which says nothing like that.
It sounds absurd; instance variables should (imo) be grouped logically, not sorted by names that aren't very relevant to which ones belong together.
If I have a buffer, and a length, I'd like to keep them together in the code since they both are part of the same thing. The length, however, is likely some integer type that won't need mentioning when destroying the instance.
Not to mention packing, which if you are compiling with enough warnings turned on, can get annoying. I like to use http://google-styleguide.googlecode.com/svn/trunk/cpplint/cp... on my code, but many of the warnings are more style preferences than anything (like where braces go). Still, I agree with most of their style guidelines, and it's nice having checks for things like unnecessary includes, missing includes and missing idempotency preprocessor guards.
Edit: BTW, yes, I'm talking primarily about C++; I haven't done enough ObjC to know about packing there, so YMMV. As for dealloc arguments, again, I don't know how it's done in ObjC, but in C++ I think it's irresponsible these days not to be using something like shared_ptr<> if you are dealing with the heap, thereby obviating the need to even handle deallocs.
In some projects I've worked on, we allow logical groupings, provided comments that describe what they are, but then require alphabetizing within the groups.
> dealloc should process instance variables in the same order the @interface declares them, so it is easier for a reviewer to verify.
One engineer's idea of 'logical order' may not be another's, so alphabetical order seems as good a sorting method as any. And since the article itself doesn't link to the style guide, here it is:
Google keep it regularly updated, so it includes newer developments like ARC and the modern literal syntax.
(Also, line lengths merely have to 'try' to be less than 80 characters, so there's wiggle room here too.)
I don't see anything offhand in the internal ObjC guide about alphabetical order for ivars, so it's possible the OP misunderstood the "dealloc in declaration order" rule. It's also possible his specific project had a convention of sorting them alphabetically.
Personally I generally organize by kind: model (including state), and then interface (views, etc), and then controllers (including helpers).
Isn't this backwards? The code should be reviewed after I've merged master into my branch; the merge can introduce any number of random issues and conflicts that may substantially change the code being submitted.
Besides, I think the 80-chars limit (especially in ObjC) is ridiculous, probably just put there by terminal diehards that make life harder for everyone else.
(bonch, you’ve been hit by the hellbanning BS: your comments are invisible)
Users of the apps I've launched with the new style guidelines have a horrendous time because they don't notice that the main action is often an unlabeled icon with no chrome in the top-right action bar, etc. Often they flounder around in the app, doing what they can by tapping on content directly, and never even try or notice the action bar icons.
Do flat, chromeless, unlabeled icons look good? Yes, but they are about as unusable as possible. The worst thing is that we are forced to follow Google's guidelines if we want to get featured, and their design guide is a piece of crap not backed up by user studies. So we developers end up implementing all these stupid workarounds, like a tutorial overlay the first time any screen is shown, with a big freaking white arrow and some text pointing to the otherwise unnoticeable corner icons...
That's not to say there aren't a lot of horrendous-looking 3rd-party apps on Android, because there are.
Anyone know what he is referring to?
Edit: anyone know what Git tool he is referring to?
Maybe I should check it out.
Fortunately, with VS2012 the built-in merge tool has become much better and allows inline editing in the basic compare/merge view as well. I can't say I don't miss p4merge, but I miss it a lot less.
The (old-ish) version I used had only 3 panes, though, making p4merge superior for 3-way merges. p4merge was easier to use from the keyboard, too. But for ordinary merges, I much preferred Araxis.
Based on the screenshots, it looks like something developed in-house for Google security guards. I wonder if Google can actually put the right kind of expertise and resources into selling these sort of specific IT solutions to non-technical organizations.
Wow. That sounds crazy. I feel there's an opportunity here to make a merge tool that knows how to handle .pbxproj files. The most common scenario I've encountered is when multiple people add files to the project - the entries get appended to a section of the .pbxproj file and consequently result in an (easily solvable) conflict.
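For illustration, such a conflict typically looks like this inside one of the project's build-file sections (the file names and hex IDs below are invented): both sides appended an entry at the same spot, and keeping both lines resolves it.

```
<<<<<<< HEAD
		9A1F0C2B16B0000100ABCDEF /* FeedViewController.m in Sources */,
=======
		8D3E0A1C16B0000200FEDCBA /* SettingsViewController.m in Sources */,
>>>>>>> feature-branch
```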
Part of me wonders why nobody at Apple has just gotten fed up with merge conflicts and solved the problem already.
Here's a recent diff of adding some files:
and the equivalent change in Gyp:
This is something I have a hard time wrapping my head around. I am not a designer, and wouldn't even consider myself particularly good at it. However, there is no shortage of evidence to support the value and importance of good design. Therefore, I don't understand why any company or team culture would marginalize it.
Can anyone comment on the extent of this culture at Google or elsewhere? Specifically, why it exists and how it's propagated.
2. Many PMs believe that all design is subjective, and therefore it's perfectly reasonable to substitute their judgement for that of the UX'er. So they argue.
3. Google is a uniquely bottom-up culture, where consensus is required to move forward.
These three things lead to design by committee, and it is extremely painful.
is it rietveld? http://code.google.com/p/rietveld/source/browse/
I use SourceGear DiffMerge as my default Git mergetool. Makes it pretty straightforward to merge .pbxproj files.
Unfortunately I can't edit or delete the original comment.
Seems like he's at least partly using it for some self-promotion.
Yes, he should have lied about the fact, as a real professional would.
However, there is a huge difference between "lying", and choosing good phrasing (or even omitting unnecessary information), and your comment strikes me as very naive.
And I would understand it too if a guy had to quit working for me due to "out-of-control family reasons". Let's be honest: everybody would do it if they had to. Imagine your child being seriously ill. Would you continue working on some project when you are needed thousands of miles (or many hours) away?
As for the people that would mind those kinds of things, I wouldn't want them hiring me either. For one, they could have made my life hell afterwards with lawyers and claims...