
I worked with Anders for years (he was on my interview loop, still no idea how I got in), and no, he's not the father of the CLR. That project (Project 42, COM+, NGWS, among other names) I think even pre-dated his arrival at MS. Anders was pretty focused on C#, which _was_ a direct consequence of the dust-up with Sun over Java. The MS Java libraries (WFC, etc.), COM+, and C# sort of all converged to become what is considered the CLR + .NET today. When VJ++ was killed via the Sun suit, the CLR was already in existence, but still pretty rough. It was a solid (fun, but painful) 3 years between when that happened and when .NET 1.0/VS 2002 shipped.

The "father" of the CLR is probably Mike Tougtangi, or Chris Brumme. I'm not exactly sure who was on that original team.

-----


Who killed VB6 and thought VB.NET would be a good idea? I remember a long-gone Channel 9 video where the VB guys made arguments for .NET and showed a demo of the third-party transition tool that shipped as a Lite version with Visual Studio 2002/3.

-----


The idea was that a converged, single, multi-language runtime and framework was the way to go. This is the way the culture works in Redmond - that's considered "strategic thinking", which only exists in that bubble. It wasn't that the plan was bad, it's that it was unrealistic to think that the users of a very mature, very scenario-focused product would be happy with a V1 generalized product that was sorta-kinda the same thing. That assumption was obviously false, and I actually fought against the porting tool because I didn't think it would work (it didn't) and would just piss people off (it did). Keeping VB6 alive until VB.NET matured is antithetical to the way the place runs. Newer is _always_ better, even if it's not, and there lacked any incentive in the comp system for people to take care of the existing userbase. So, that's what happened.

-----


> there lacked any incentive in the comp system for people to take care of the existing userbase

Can you explain that? Are there incentives to do other stuff?

-----


The basic system - no. As of 2012 when I left, comp was about what your management was interested in. Maybe "take care of existing users" is on your review commitments, but what's really on there if you want to get ahead is: ship features, drive future business, etc. As such, people wanted to work on the next new thing; it was relatively bad for your career to be on existing, old things. Massive variations between groups, but basically that was the culture.

-----


Thanks for sharing your experience, it helps to understand some of the business decisions.

-----


This really puts into focus the meaning of "strategic thinking": abandoning current customers in favor of potential future customers.

-----


Honestly there were a lot of bad design decisions in VB6 that needed breaking backward compatibility: requiring "Set" to assign objects, parameters being ByRef by default, inconsistent bases (1 or 0) for various collections. VB.NET isn't that difficult to migrate to, and once one gets used to it, it's hard to have much nostalgia for VB6.

-----


Were you there when generics got brought in? What was that like from MS's side?

-----


Yeah - this is where Anders shines. Generics were brought in primarily to avoid boxing/unboxing costs of value types, but really are incredibly powerful when used properly. And I'll say that generics in C#/.NET are done very well. Every time I use Java, I want to use generics until I remember Java's generics are confusing and almost useless.

I really miss having them when doing work in other languages; even with some of the covariance gotchas, they're just amazingly good.
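To make the boxing point concrete, here's a minimal C# sketch (my own illustration, not from the discussion): the pre-generics ArrayList boxes every int and needs an unboxing cast on the way out, while List<int> does neither and the compiler enforces the type.

    using System;
    using System.Collections;          // non-generic ArrayList
    using System.Collections.Generic;  // generic List<T>

    class BoxingDemo
    {
        static void Main()
        {
            // Pre-generics: each int is boxed to object on Add,
            // and needs an unboxing cast on the way out.
            ArrayList untyped = new ArrayList();
            untyped.Add(42);                 // boxes the int
            int a = (int)untyped[0];         // unboxes, can fail at runtime

            // With generics: no boxing, and the type is checked at compile time.
            List<int> typed = new List<int>();
            typed.Add(42);                   // stays an int
            int b = typed[0];                // no cast needed

            Console.WriteLine(a + b);
        }
    }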

-----


I thought the CLR team had no plan/way to ship generics in the CLR, and C# would have had to go with erasure, like Java. That the first C# with generics came from across the Atlantic. Isn't that so?

-----


I also note that this idea that people care about animals and not the other way around is hard to respect given which direction the killing typically goes.

For the most part animals ignore people. For the most part people kill animals.

-----


A lot of dogs could absolutely kill you, but they don't. My cat could be 100 lbs and she'd still be more interested in kibble and pillows than mauling me. So, no, not all animals are sociopathic (I'm not sure that's the right word in any case). Wild animals are wild and do wild animal things; they absolutely live and die by a different code.

But this whole argument, I think, is spurious. Simply because they follow a different set of rules doesn't mean that they don't deserve our caring and protection.

Your argument sounds a lot like the one people make about other groups of people who find themselves morally superior versus another group. Replace "animals" with "muslims" or "jews" or "communists" or whatever and it may sound familiar.

That we recognize and respect that animals have needs and feelings even if they are not beneficial to us is exactly the point of this.

-----


By your argument, the cost of food/goods is going up because we're becoming less efficient at producing them. That's not how inflation works, and that's not how deflation works.

Maybe go bone up on your basic economics?

-----


Basic economics are just, like, what the man wants you to think, man.

-----


No... That isn't my argument at all. I guess you should go back and read my argument again because you clearly can't understand it in a single go.

The reason prices go up is that the Fed absorbs all of the technology-powered reductions in price in order to maintain an artificial level of inflation.

You are a baboon.

-----


I find this argument very uncompelling. This product sounds like it never exited MVP if free SQL Server would have been good enough. I'll agree that at small scale there are lots of potential technologies. But it also makes me wonder what was so challenging here that they felt they had to keep swapping around. I won't speculate here.

However, I disagree with the premise. At scale this absolutely matters. Different databases have vastly different design goals and CAP-theorem attributes. You'd better pick the right one or it'll impact your customer experience.

-----


> At scale this absolutely matters.

So worry about it when you get to scale, not before. Most products and startups never reach the scale at which such basic technology choices matter; their time is better spent growing the company (adding user features, attracting new users, etc) than fighting with technology.

MySQL was never designed to scale to the levels at which Facebook and Google use it. And yet both companies are still using it today, even as the roles they use it in diminish as better technologies are brought to bear.

Technology shortcomings can be worked around, so long as you're still in business.

-----


> So worry about it when you get to scale, not before.

You are advocating that people spend man-years of work porting their codebase just as their company starts to get successful, instead of stopping a couple of minutes earlier on to correctly decide what tools they'll need.

-----


> instead of stopping a couple of minutes earlier on to correctly decide what tools they'll need.

Based on my previous experience, I'm advocating exactly that. The reasons are:

1) Bikeshedding - everyone knows that technology N is better than technology M, even if nobody has real work experience with N.

2) Starting with a new technology that people have only done side projects in costs significantly more time than just a few minutes.

2a) Nobody knows what those costs will be beforehand.

3) By the time you actually have to spend those "man years", committing that effort won't negatively impact your ability to reach new customers.

3a) This work will not be required "just as their company starts to get successful", but at some point after that.

3b) You will most likely not have to do a full rewrite all at once. Small refactorings of the actual pain points are how it's typically done (there are plenty of examples of this in the marketplace today - Google, Facebook, Twitter).

4) Premature optimization, YAGNI, DRY, etc. all apply to your technology choice as much as they do your codebase.

-----


For some values of startup, this all makes sense. If you're trying to solve a hard computing problem (not a social problem), then you have to write your own code. It turns out open source rarely gets around to hardening its code.

Then you get to decide - jump aboard some community and try to help them get the code where you need it, or write your own. The community sounds nice, but the thrashing there can add more work than it's worth - tracking (incompatible) changes, arguing for your APIs and layering, etc. It doubles the job at least.

Then you write your own. Again time is wasted, but not how you think. With examples of working code and/or a good idea of what you need, you write it and it works. But you spend the rest of your life defending your decision. Every new hire naively says "you could just have used node.js!" And you try to walk them through the design points, but it's almost hopeless.

I'm maybe a little depressed about this right now. My startup just got refinanced under the condition we use open source for everything, and write our app as a web app. Which of course is a pipe dream, since our app does hard things using carefully written low-level code, complex network strategies and heuristics hard-won from a thousand customer issues met and solved.

But no! Chrome can do all that, for free! So I'm asked to quit being a not-invented-here drudge and jump on board.

Anybody hiring?

-----


Ugh. Good luck with that... Maybe you can pull a "Breach" and only use Chrome as your, well, Chrome.

http://breach.cc/

-----


nFaithInHumanity--

-----


I'd like to know if these sounds are learned, instinctual, or a mix of both. IOW, if you placed a monkey raised elsewhere into this environment, would it know and/or adopt these sounds?

Isn't assigning meaning to otherwise-arbitrary symbols/sounds a key aspect of language?

-----


There is some non-arbitrariness to language.

http://en.wikipedia.org/wiki/Mama_and_papa

http://en.wikipedia.org/wiki/Bouba/kiki_effect

I guess it gets arbitrary at complex enough levels though.

I'd hazard a guess that hok and krak have some component of an instinctual/physical nature to them. I personally think that certain sounds are related to physical experiences or expressions of emotions. An obvious one is the surprised "Oh!" with an open mouth.

"Hmmm" whilst thinking or concentrating, frowning and closing your mouth.

I'm currently watching my son learn to speak and his verbalizing seems pretty closely tied to his emotions at the moment.

"Oishii" means delicious in Japanese and it seems something that makes sense to say whilst you are smiling at enjoying your food. The long "ii" vowel to rhyme with the "e" of "she" in English.

-----


>"Oishii" means delicious in Japanese and it seems something that makes sense to say whilst you are smiling at enjoying your food. The long "ii" vowel to rhyme with the "e" of "she" in English.

Honestly, I don't necessarily agree. For example, in Japanese 'iie' sounds very similar to 'yes' or 'yeah', but it actually means the opposite: it means 'no', whereas 'hai' means 'yes'.

If we want to talk about individual phonemes caused by emotional reactions, there might be some truth behind what you're saying; however, as soon as we enter the realm of "this word sounds soft so it's positive" and "this word sounds hard hence it's negative", everything collapses.

-----


Obviously you can find tons of examples of words that are different in different languages. I probably confused the situation by bringing Japanese in. My son is Japanese, so we speak Japanese to him as a baby. I wasn't trying to compare languages. I was trying to talk about baby words.

Yeah it falls apart at any level of complexity.

I just think there are certain cases among often-used words and words that babies say or hear a lot at first. Like the mama/haha/papa/baba words. I'm talking about a 'language' in the same way the article talks about an animal language: a few often-used words linked to emotional states.

I don't mind if you disagree I just happen to believe oishii may be one of these words.

-----


All predicate adjectives in Japanese end in -i: ookii, chisai, mazui, etc. The -i is the suffix indicating it is a predicate.

oishii thus means "is delicious" - you don't need a "da" after it

-----


I made a mistake by bringing Japanese into it - see my other comment. It's the language we speak at home so the one I use to talk to our son. The emphasis was supposed to be on baby-talk not foreign languages.

-----


For humans, because of the diversity of sounds we can make, apparently the only thing approaching a universal word is "huh?" [0].

It'd be interesting to see how many sounds a monkey can make. If it's a very low number there'd probably be more universally used sounds, but I'd imagine it's greater than for most animals. I couldn't find a useful way to search for that. Unfortunately a lot of the results tended towards the "What sound does a monkey make" type of response, and I'm not versed enough in linguistics or monkeys to query more efficiently.

You can see on the map [1] that the Ivory Coast and Tiwai Island are far enough apart for the monkeys' languages to have split off at a much earlier stage and evolved differently. I'd assume this is more likely the case.

So I'm going to assume no, a monkey taken and raised elsewhere probably wouldn't instinctually jump into the trees upon hearing a "krak". But even the article states there are more experiments that need to be performed (although that particular experiment is a little insidious considering the intelligence of the animals you're kidnapping (maybe if you saved one whose parents were incapacitated in some way)).

Of course, you could just go to the zoo and yell "krak!" at some of the monkeys and see what happens! Might get you some weird stares! ;-)

[0] - http://www.newstatesman.com/martha-gill/2013/11/what-one-wor...

[1] - https://www.google.com/maps/place/Tiwai+Island,+Sierra+Leone...

-----


Well, it seems strange to me. The article you link seems to suggest "huh" might be a universal word, but in the details, they show that it's pronounced differently all over the place. For example, it tends to be closer to "ah?" in Mandarin.

-----


I'm suddenly very curious about the universality of the similar "uh-huh".

-----


I think that depends on what you call 'meaning'. Is meaning an intuitive, instinctive representation of something?

What I mean is, for every word that exists, does that word trace back to a reality origin through a pattern of substitutions? Substitutions of symbol for reality are really just an associative relation. Substitutions of symbol for symbol work functionally the same way as a substitution between reality and symbol.

So then my question is, is language really anything more than remembering that the cherry came from the tree? Once the cherry is disconnected from the tree, we have two things - cherry and tree. But before we distinguish them as parts, we recognize them as a whole. When I walk away from the tree, taking the cherry with me - what happens if I still use the tree in my mind to represent the concept of fruit? It's a choice function. Does it matter whether I remember these things using sounds, symbols, images, experiences, or feelings? Language is interpreted and expressed across and using all of these domains. A poem carries greater meaning than the words do individually, and that is because there is an emotional association that maps to the selection of words. We don't really call 'emotion' language, nor do we call 'art/music/math' language, yet these things arguably can have a strong influence on how we 'know' what language represents.

-----


I don't believe this is the question you should be asking.

Your worth as an engineer, over time, will be measured by breadth, not depth. I know engineers that have become industry-class experts on font rendering or compiler optimization. You know what happens? It's tough for them to switch jobs.

For the most part this industry, at the most successful end, values rock solid fundamentals and versatility.

Your co-worker's path is dangerous. Replace "Microsoft" with "COBOL" or "Fortran" or "Mainframe" and you'll see that this is a problem.

At this stage of your career, you should be focused on learning how _software works_, not a particular stack. Stacks come and go, rapidly. Technology changes. But software and how it works doesn't change all that much.

So pick the right tool for the job and build your skills portfolio. Write some iOS apps. Write some browser-apps backed by Node, or Go. Write some client apps, build some APIs.

I will say, however, that if you are interested in startup stuff I'd recommend against the MS stack. My company is partly on MS, partly not, and the licensing issues with Windows become painful very quickly just for flexibility. Windows licensing and things like Vagrant don't get along very well if you want to have N flavors of a VM and use them at will. There are other reasons here but I'll tell you that Linux machines are just a lot easier to manage in general, and I'm historically a Windows guy (I worked at Microsoft for a _long_ time).

You're still in that stage where you don't know what you don't know. You're not choosing a wife or a house here, just go play the stack field.

-----


I agree with your point overall but disagree with one point:

> Your co-worker's path is dangerous. Replace "Microsoft" with "COBOL" or "Fortran" or "Mainframe" and you'll see that this is a problem.

The coworker is described as being "in his fifties." It's probably not unreasonable at that point to have significant depth in a single ecosystem. COBOL was exactly the case I was thinking of: very few (if any) new COBOL programmers are being minted anymore and there is still a lot of legacy code. It will probably be a good, if dull, career for the next 20 years for that coworker, who will at the end be able to write his own ticket to fit around his almost-retired status.

For someone with 40-50 years of working life ahead of him/her, this sort of specialization is indeed premature and dangerous.

For me that kind of specialization has never been attractive, an attitude that has served me well. But that's not true for everyone, at every stage of life.

-----


To be fair, COBOL stopped evolving a long time ago, while .NET hasn't. He's in his fifties, but he's not working on legacy code; he grew up with the tip of Microsoft's technology. For example, he has pet projects on Azure and just completed a project at work in .NET 4.5 and Entity Framework 6. Sure, it's still a lock-in, and potentially dangerous, but nowhere near as much as COBOL or Fortran.

-----


Yep, you've got it. So my point was directed against pre-chosen specialization for more junior engineers.

I should say that some level of specialization over time is natural. At some point you'll likely work on one thing for an extended period of time, and you'll build a bunch of domain knowledge.

In my case, I'm fairly biased against developers who frame their skills in relation to a specific technology - .NET Software Developer, Java Software Developer, iOS Software Developer. I really want people who think of themselves as Software Developers first and don't worry too much about the stack/domain.

See: Software Athletes.

-----


In short: no.

Today's programmable Internet is an existence proof of this answer.

In the '90s and early 2000s there were a lot of attempts to codify machine-discoverable protocols: WSDL, SOAP, etc. There are still a lot of people who are sort of obsessed with this idea. Three things have happened:

(1) The weight and complexity that have come along with those protocols have been larger than the value they bring. Auto-generating proxies just isn't that helpful when you still need a human brain to figure out how to connect things together and make them useful.

(2) The growth and success of cloud-based apps show that better protocols are not a necessary factor on today's Internet. This does not deny that better/faster/richer protocols could improve things; it's just not clear that's a critical-path problem. It's just not hard to look at REST docs and wire shit together, regardless of whether it's optimal or not.

(3) Layering app-level use cases over HTTP (HATEOAS, OAUTH) works pretty well when you're navigating known domains.
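As a rough illustration of points (1) and (2), this C# sketch is about all the ceremony consuming a typical REST endpoint takes, with no WSDL and no generated proxy. The endpoint URL is made up; in a real app you'd deserialize the JSON instead of just printing it.

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class RestDemo
    {
        static async Task Main()
        {
            // Read the docs, build the URL, handle the JSON you get back.
            // The endpoint below is hypothetical.
            using var client = new HttpClient();
            string json = await client.GetStringAsync(
                "https://api.example.com/v1/widgets?limit=10");

            // The point is how little ceremony the call itself needs.
            Console.WriteLine(json);
        }
    }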

-----


I did a fair amount of experimentation with PhoneGap/Cordova a couple of years ago. What really killed me wasn't so much perf as the places where you want or need native UI. For example, if you want a nice mapping experience, JS-based maps suck on mobile compared to the native map widget.

Based on this I think the Xamarin model makes a lot more sense in today's mobile landscape. Be smart about building shared logic and view controllers and wire it up to fully native UX.
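A hedged sketch of that "shared logic, native UX" split, assuming a Xamarin-style solution: the interface and class names here are invented for illustration, and the per-platform map implementations are only indicated in comments.

    using System;

    // Shared project: platform-neutral logic plus an abstraction over the
    // map widget, so each platform can plug in its own native control.
    public interface IMapView
    {
        void ShowPin(double latitude, double longitude, string label);
    }

    public class StoreLocatorViewModel
    {
        private readonly IMapView _map;

        public StoreLocatorViewModel(IMapView map)
        {
            _map = map;
        }

        public void ShowNearestStore()
        {
            // Shared business logic lives here; only the rendering is native.
            _map.ShowPin(47.6062, -122.3321, "Downtown store");
        }
    }

    // On iOS the implementation would wrap the native map view; on Android,
    // the Google Maps widget. A console stand-in keeps this sketch runnable.
    public class ConsoleMapView : IMapView
    {
        public void ShowPin(double latitude, double longitude, string label)
            => Console.WriteLine($"Pin '{label}' at {latitude}, {longitude}");
    }

    public static class Demo
    {
        public static void Main()
        {
            var vm = new StoreLocatorViewModel(new ConsoleMapView());
            vm.ShowNearestStore();
        }
    }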

-----


