This is such BS, I can't even read it without physically cringing. I work for a ~400-person business that stands up .NET websites at breakneck speed, and we do it well. People who blame their problems on technical infrastructure decisions almost ALWAYS do so because it's easier than addressing the true underlying problems.
"The biggest problem was they didn't allow the developers to have staging or testing servers -- they deployed on the production servers on the first go-around... And all this with no change management or control. No versioning either."
Oh wow. Wow. I hereby revoke my previous statement. These are some God-awful infrastructure decisions. Version control and a staging server are the most basic necessities for a scalable dev project. I even set them up when I'm working on a personal project, alone.
As for their actual infrastructure: a version control system and a build system (i.e., never deploying directly from the VCS) are a must for teams of any size, and often even for single developers. Thankfully, nowadays most people do use VCSs, but build and deploy systems are still, at best, glorified shell scripts.
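The minimal property a deploy system should have, beyond "a shell script", is an atomic, reversible switch between releases. A toy sketch of the symlink-swap pattern many deploy tools use (all paths and version names here are invented for illustration):

```python
import os
import tempfile

# Build artifacts land in versioned directories; "deploy" is an atomic
# symlink swap, which also makes rollback instant.
root = tempfile.mkdtemp()
for version, content in [("v1", "one"), ("v2", "two")]:
    os.mkdir(os.path.join(root, version))
    with open(os.path.join(root, version, "index.html"), "w") as f:
        f.write(content)

def deploy(version):
    """Point 'current' at a release directory atomically."""
    tmp = os.path.join(root, "current.tmp")
    os.symlink(os.path.join(root, version), tmp)
    os.replace(tmp, os.path.join(root, "current"))  # atomic rename on POSIX

def read_current():
    with open(os.path.join(root, "current", "index.html")) as f:
        return f.read()

deploy("v1")
deploy("v2")            # ship the new build
print(read_current())   # two
deploy("v1")            # roll back instantly if it breaks
print(read_current())   # one
```

The point isn't the specific mechanism, just that "roll back" should be one cheap operation rather than a frantic redeploy.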
Management is not excluded from this either. They have to:
1. know, in depth, the business they are in and the tools they use (or could use) to achieve their goals
2. listen to what their team tells them and provide an environment where they can do the job right
It sounds like there was a lot of micro-management and putting out fires in this case.
For the record, I think it's irrelevant.. but absolutely I think it's true. It doesn't mean .NET isn't capable, or that lots of .NET programmers aren't capable...
Just that .NET is way more popular in the enterprise than on the web, and that the work that gets done at most of those enterprise shops wouldn't fly on large sites.
It may be popular to pretend that the .NET camp (and to a slightly lesser extent, the Java Enterprise one) doesn't bend this way statistically, but they do.
It is a fallacy to think that the web is the only large network that requires scale.
Goldman Sachs alone stores more data than the entire web
The Visa network has had 4 seconds of downtime in decades
Most airline systems took decades to engineer
Payment processing deals with only one type of data: money. Every constraint that imposes also buys you a shortcut.
Airline control and departure systems are probably as close to a typical modern web app as you'd get from your list.
My point is that while there certainly are engineers that have worked with high scalability issues without ever touching the web, they have also likely been solving slightly different problems.
P.S.: Other systems that require high availability but are not the web: telephony and cell communications, broadcasting and doomsday devices.
Edit: Oops, here's the link: http://www.forbes.com/global/2002/0916/038.html
Goldman Sachs told me when I was writing a proposal for them years ago that they have over a petabyte of data stored. The web is 80-200TB depending on who you ask. A single department there responsible for the program based trading would alone have an entire copy of the web (and parts of the deep web, and all of twitter, etc.) since they construct those whack trading apps that suck everything up and analyze it for signals. If there are any quants on here they could tell you about this more.
The airline system I was referring to is SABRE. Early IBM was built on their rollouts, and we are talking about the 50s. Very interesting story; lots of references from this page: http://en.wikipedia.org/wiki/Sabre_(computer_system)#History
It would be interesting to get more recent figures and compare again, because I do know that the investment banks hoard a lot of data.
In terms of just documents, they would easily store more than the web
(it was ~10x the web at the time)
Why would you believe that Goldman Sachs stores more data than can be reached by HTTP?! YouTube alone amounts to petabytes of data. http://beerpla.net/2008/08/14/how-to-find-out-the-number-of-...
the point is that it is frikkin big - and most investment firms now scan the entire web for signals
startups don't have a monopoly on working hard or working fast, and it is arrogant to generalize about both .NET and enterprise developers in that way, since we are all, in one way or another, standing on the shoulders of earlier enterprise work (where do you think what we call 'nosql' and consider new and groovy was first used?)
Second, arrogant really isn't fair because I certainly never gave an assessment of my own abilities or value.
Third, totally.. .NET stands on the shoulders of the same stuff the infrastructure that runs most of the internet stands on. It's just that .NET doesn't run most (or even lots and lots of) the internet.. so to suggest that the .NET development community is less likely on average to be ready to build Facebook doesn't seem like blasphemy.
That's very different from suggesting the .NET camp isn't full of awesome, hard-working developers... but really, I apologize if it comes off that way. Definitely don't mean to suggest it in the least.
Most of my familiarity with the .NET community comes from the fact that I live/work in San Diego, and in the Healthcare IT space... if the combination of which don't make up the largest percentage of .NET developers in a particular area and arena, it must be close :)
So most of my experience with the .NET community comes from knowing developers and working with .NET-turned-Java folks... It's true, I haven't done .NET in a very long time. That said, I made the same assertion (to a slightly lesser extent) about the Java Enterprise camp.. and I've got loads of experience with that :)
Ruby developers? (Ref: Twitter)
I think HTML programmers are the only ones who are constitutionally constructed to create large scalable websites at a startup pace.
Also, it's probably fair to say that as a whole programmers are 'largely' not used to thinking or working in a way that's productive at scale. I didn't argue otherwise.
I just suggested that the percentage of developers in the .NET community who are aligned with the values, knowledge, and experience essential to scaling a large site.. is smaller than on many other platforms... Yes, I'd argue smaller than on all of the ones you mentioned.
Exactly. This seems to follow the Kevin Rose formula: make horrible decisions (or be entirely absent from decision making) without any understanding of technology, then blame your developers and technology choice ("Digg v4 failed due to Cassandra").
If it weren't for Facebook's success, I can bet you'd see people blaming PHP for Digg's failure: prior to Facebook, the P in LAMP also stood for Perl (and occasionally Python), and LAMP wasn't "universally" considered proven (unlike J2EE + Oracle or .NET + SQL Server). Nor has Facebook been even remotely close to a "vanilla" LAMP site (since at least 2005) -- many mission-critical subsystems are also built in Java and C++.
The P in LAMP has been associated with PHP for much longer than the existence of Facebook.
I think he thinks it's a strategic advantage to understate the work and resources he's invested into pof. (Makes for a good marketing story and newbies think building a pof clone is easy so they waste lots of money trying.)
Right about the time that blog post was written, peer1 had a marketing video on their site where a guy walked around a data center being interviewed by someone off-camera. As they walked, the guy waved at a few racks of servers and said "these are Plenty of Fish's servers," walked further, "these are [some other big web site]'s..." From that video it was clear there was no way he was only using one server. There were lots of servers in use.
I find it sad that these days MS is lagging behind. It is difficult to get caching frameworks and other infrastructure working with the MS stack if you've got a high-performance website.
The Microsoft stack is not always just a Microsoft stack any more. e.g. it has jQuery out of the box. Of course you mix and match when you get to the high end. The idea of the MS-only shop is not as true as it used to be, many people are more pragmatic.
I find it sad that these days MS is lagging behind
Why, because MS didn't supply every single piece of server infrastructure software that SO uses, just the main ones? That's an odd definition of "lagging".
MS have a distributed cache called "Windows Server AppFabric" (formerly "velocity"). I don't know that much about it.
See, I prefer sticking with one flavor of tools because it's easier for developers to adjust. TBH, MS does supply almost everything from the ground up. I only had to look elsewhere for advanced distributed caching frameworks. In fact, before switching to Amazon EC2, our old datacenter was running MS VMM and our stack still didn't have anything other than MS software.
Where the open source choice is more functional, cheaper or just more familiar, it often gets used instead. This is good.
Connecting OSS tech and MS stack is getting easier but still leaves you with a lot of uncertainty.
This is an example: http://wiki.basho.com/Client-Libraries.html These guys have an Erlang client, but there is no support for .NET.
Caching frameworks are absolutely necessary. The whole idea is to avoid a SQL Server hit and return a cached data object from memory. I think StackOverflow is the best case study here.
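The pattern being described is cache-aside: check an in-memory store before going to the database. A minimal language-neutral sketch (in real .NET deployments this role is played by memcached, AppFabric, or similar; the store, TTL, and function names here are illustrative):

```python
import time

_cache = {}        # in-memory store standing in for memcached/AppFabric
TTL_SECONDS = 60   # how long a cached entry stays fresh

def fetch_user(user_id, db_lookup):
    """Return the user record, hitting the database only on a cache miss."""
    entry = _cache.get(user_id)
    if entry and time.time() - entry[1] < TTL_SECONDS:
        return entry[0]                  # cache hit: no SQL round-trip
    value = db_lookup(user_id)           # cache miss: hit the database
    _cache[user_id] = (value, time.time())
    return value

calls = []
def slow_db(user_id):
    calls.append(user_id)                # stand-in for an expensive SQL query
    return {"id": user_id, "name": "user%d" % user_id}

fetch_user(7, slow_db)
fetch_user(7, slow_db)                   # second call is served from cache
print(len(calls))  # 1: the database was only hit once
```

Under heavy read traffic, most requests become cache hits, which is exactly how a site avoids drowning its SQL servers.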
SQL Server is the third best Microsoft product. Right after their natural keyboard and their mice lineup.
Remember when people used ASP.NET Web Forms and it was hard to get rid of viewstate in the rendered page, and only the people who knew the internals of the platform well could fix it? It drove SEOs mad, and they started to recommend staying away from Web Forms.
I agree with you on the staging and testing servers. Not having them is planning for disaster.
I find this shocking and somewhat unbelievable. My wife, who isn't a tech person at all, thinks that page load speed matters (she just called, so I asked her) -- and cites Gmail as an example of a page that takes too long to load (I think that "loading..." indicator actually brings their load time to the forefront, although it really isn't that long).
I just have trouble seeing a .NET developer saying "for high volume sites page load speed doesn't matter", when most people who aren't in the tech industry would concede that it does.
Most .NET developers are working on intranet sites. They don't have a problem with large-footprint pages.
When they move to internet and public-facing websites, they are newbies. It takes them a while to adjust to the way internet sites are written. SEO optimization, CDN usage, Ajax calls, etc., are far more important on an internet site than on an intranet site.
Companies like 37signals are doing it successfully ;)
IMO, that's a very different statement. One is a difference in experience with a domain, the other is ideological.
I've read before that Facebook has shown that users tend to spend a fixed amount of time on their site. Once users hit that time limit, they're done. If your site exhibits similar usage patterns, the faster your pages load (even if they're already fast), the more users can get done on your site, which, depending upon your revenue model, may result in more revenue.
For example, MySpace was at one point getting 24 billion page views per month. If you could shave 1/100th of a second off each page view, you'd save nearly 400 weeks of aggregate user time per month (assuming the model where they look at a fixed number of pages).
In the Facebook model, if you assume a page comes up in half a second, this delay results in a 2% decrease in page views -- which is a pretty huge deal when your business model indirectly revolves around page views.
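A quick sanity check of the arithmetic, using the figures above (24 billion monthly page views, 1/100th of a second saved per view):

```python
page_views_per_month = 24_000_000_000   # MySpace's reported monthly page views
saving_per_view = 0.01                  # shave 1/100th of a second per view

seconds_saved = page_views_per_month * saving_per_view
weeks_saved = seconds_saved / (7 * 24 * 3600)  # seconds in a week
print(round(weeks_saved))  # ~397 weeks of aggregate user time per month
```

Spread across tens of millions of users that is a tiny saving each, which is why the fixed-time-budget model (more pages per session, hence more revenue) is the more persuasive framing than raw productivity.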
I guess my point is that even for people who come from a background where page load didn't matter, it would take 30 seconds to point out that it does, and I don't think you'd get any pushback.
that sounds big and important, but it doesn't really mean much, does it?
the difference between 1/100th of a second and 2/100ths of a second is too small to translate into enough time to get any increase in productivity.
There must be some other reason that load time is that crucial.
You're conditioned to not care anymore, because ASP.NET Web Forms makes it so damn hard to achieve responsive web apps, and building responsive web apps almost universally means breaking away from the standard ASP.NET Web Forms style of development. You have to abandon pretty much everything that makes the platform convenient in order to get good performance.
MVC, however, is designed from the ground up to deliver speed and web-standards compatibility. Building for the web these days with Web Forms is just wrong. But yes, bending Web Forms to actually deliver requires wizardry.
oh and that viewstate thing from the SEO is the SEO talking bullshit and trying to justify his job
I had the same reaction to the SEO at first. See, the SEO guy is partially right. It comes down to page load times: a larger viewstate means a longer load time, and that gets penalized by Google.
Have you informed them that you work with a high traffic site? Most intranet sites are not high traffic, so if they're spending time pre-optimizing for speed instead of features/development time, they're actually wasting the company's money.
And what has viewstate got to do with SEO? You lost me.
For SEO-friendly URLs, all you need is a URL rewriter. http://www.google.com/search?client=opera&rls=en&q=a...
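Conceptually a URL rewriter is just a pattern rule mapping a keyword-rich public path onto the real handler and its query parameters. An illustrative sketch (the rule, path shape, and handler name are made up; real rewriters express this in config rather than code):

```python
import re

# One rewrite rule: /products/42-blue-widget -> internal handler + id param.
RULE = re.compile(r"^/products/(?P<id>\d+)-(?P<slug>[\w-]+)$")

def rewrite(path):
    """Return the internal URL for a friendly path, or None if no rule matches."""
    m = RULE.match(path)
    if not m:
        return None
    # Crawlers index the keyword-rich path; the app only needs the id.
    return "/product.aspx?id=%s" % m.group("id")

print(rewrite("/products/42-blue-widget"))  # /product.aspx?id=42
```

The slug is matched but deliberately ignored, so editors can change product titles without breaking old links.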
>only the people who knew internals of platform well could fix it
That's true of any platform out there.
Yes, I tell them that we have a high-traffic site, and we lose them because they don't know how to build one. Besides, you won't find many Microsoft-stack sites dealing with scaling issues. I personally looked up StackOverflow scaling case studies to design my solutions later.
Back in 2005/6, URL rewriting was still pretty difficult. I started out back then. I converted to MVC completely when the 1.0 version came out, specifically to address the SEO concerns.
But I guess our discussion is swinging to technical side :)
Edit: See sajidnizami's post for the relevant StackOverflow question. Though, this still doesn't make sense to me logically. Why would a search engine disregard a (reasonably) longer page?
And that, ladies and gentlemen, is why you don't want your software project managed by non-programmers. Any competent programmer would know not to do that.
Maybe they should have used Azure. At least that forces them into a staged deployment. (of course not that it was around when they started MySpace).
Yeah, that made me cringe too.
critical word there: largely. he wasn't saying all. he was making a generalization comparing large groups. individual cases may vary.
Once the free parking was pulled from MySpace, 50% of every team was laid off and all of the momentum was pulled from the company.
Working with .Net was not an issue, and in some cases it was a benefit.
There were however huge cultural problems with FOX. Here are a few.
Developers were given one display, not two. Screens were 1280x1024. I bought my own displays and had my manager order a custom computer for me with dual video card support.
Fox was a closed-source company, so when we were working on open tech like OAuth and OpenSocial gadget servers, we had to do it as closed-source software. We had no peer review. It's really hard to ship and test something when you don't have LinkedIn, Google, Bebo, and Twitter reviewing your code. On top of that, when those companies found a bug, we had to re-implement the fix in .NET. And MySpace and .NET were built around strong typing, and those types work a bit differently than Java's.
It didn't take a lot of time to port, so we kept doing it. But you have to think about this: you were in a race with a company like Facebook, which had zero profit motive at the time, billions in funding, and a ground-up stack. Meanwhile, MySpace was just clearing out ColdFusion, and we had really old blogging platforms that couldn't just get ripped out.
We also had management that didn't want to piss off users, so we would have 2 or 3 versions of the site out there, and we never required users to upgrade to new versions.
What MySpace had was Cultural Debt, and a fear of destroying what they had. Facebook won because they were Socratic to the core.
I'm a developer at Leads360 in El Segundo, CA. We're hiring right now. I've already interviewed several MySpacers and have extended offers to a few. We hope to get more :)
Email me your resume, if interested: email@example.com
One thing though, what do you mean when you say Facebook was 'Socratic to the core'? I'm only aware of that in context of the Socratic Method of teaching, but am not clear what it means here.
So I assume he meant that facebook knew exactly what they wanted to be, as opposed to myspace, who was trying to catch up with facebook.
Can you explain what you mean by "free parking" here?
"Free parking" is a spot in Monopoly where the rules specifically state that a player who lands on the space receives no money. However, in almost every Monopoly game I have played, players put fines and taxes in the middle of the board, and when a player lands on Free Parking they get the funds.
It's an example of not following the rules as a point of culture.
The deal soured with Google because the terms were around pageviews and clicks. MySpace and Fox decided to target those terms to maximize revenue. The result was that MySpace added unneeded page flow for just about every user action. It destroyed UX and pissed off Google. Our team kept joking with management that we were going to create a MySpace-Lite as a side project from our REST APIs and to get rid of all of the crap.
WE SHOULD HAVE DONE IT. WE SHOULD HAVE CREATED MYSPACE-LITE
There was a write-up by someone about how everybody hates Monopoly because it drags on, and few people fully understand that the reason it drags on is house rules like Free Parking, immunities, and the like.
I hated playing the board game as a child, but the console Monopoly games can have some strategy applied to them that have immediate payoffs.
I never had a MySpace page. I found the disjointed looks across people's profiles, and the seeming lack of direction as to what it was to be used for, rather annoying, and it made me consider MySpace a joke. Facebook won me over with a consistent layout and no goddamn music playing when I went to a user's profile page. I can't even begin to count the number of friends I have who agree with me, but I can count on one hand the friends who have a MySpace page.
At hi5, I remember some nights working till 5am, going home, and then being at work by 10am for a meeting.
At MySpace I tried to get a few people together on weekends to hack on cool demos, but management didn't want people to burn out. They did, however, buy a few people BlackBerrys and expect them to remote in and fix anything that might be a fire. A few of our team members had them and would have to log in at 2am and fix shit.
I am still very proud of the work I did there and respect every one of my team mates as fantastic developers. I hope to work again one day with many of them.
If there was one place MySpace didn't fail it was hiring great talent, and bringing them to LA to work. The partying in LA might have been a bit of a distraction but, it's also what made LA a ton of fun. It was so easy to date down there vs. SF where it's impossible to meet a gal.
Finding good talent experienced with huge-scale sites is not going to be easy regardless of language. It's not like MySpace could have been RoR and suddenly everything would have been simple; at their prime they were doing a ridiculous amount of traffic that only a handful of sites had ever experienced. There were probably zero people experienced with PHP at Facebook's level; they all had to learn as they went, and what they learned was that they'd picked the wrong language -- so they created HipHop, a hack to overcome PHP, and probably hundreds of other tools that help them scale better.
I suppose that's true in absolute terms (nobody is at Facebook's level). However, there is definitely an army of really high-traffic sites out there written in PHP, many of which predate Facebook. Problems of scale aren't exclusive to Facebook by any stretch.
It seems to me that Facebook's choice of PHP, in that context, was a big advantage. They've undoubtedly been able to draw on the experience others have had at scale on very similar stacks. That might not have been as true for MySpace.
That said, MySpace had a whole host of internal issues. I briefly worked for a sister site at Fox Interactive Media and had at least some insight into what was going on over there. I'm sure someone will write a book about it one day:)
If they stuck with crusty old PHP, I have no doubt they would never be able to manage the load.
For some reason ppl think Mark created it on the 3rd day
It's not that easy to get top talent if you stick to MSFT platforms.
From linked article:
> Silicon Valley has lots of talent like him. Think about the technology he knows. Hint, it isn’t Microsoft.
That's the number one benefit of an open source stack: it's possible to know the stack inside out. If it's closed source, there's an eventual point at which you're dealing with a black box.
"use of the software within your company as a reference, in read only form, for the sole purposes of debugging your products, maintaining your products, or enhancing the interoperability of your products with the software"
PHP is actually a good templating language and a passable rapid-prototyping language. The fact is that it's more important to stay agile when you're growing exponentially than to pick some optimal technology.
PHP's simplicity makes it one of the best choices to be able to incrementally build a robust and scalable back-end underneath it as you go. ColdFusion and .NET I imagine to be some of the worst (though I have no experience with either, so maybe I don't know what the fuck I'm talking about).
Ever heard of JBoss? Did you know they have an open-source CFML project called Railo? Or that Chris Schalk, a developer advocate from Google, called what another open-source CFML distro, Open BlueDragon, was doing on GAE "awesome"? He said it was the easiest way to get running on GAE.
The best developers can do amazing things in a number of different languages.
That said, I'd argue that no, Microsoft did not kill MySpace. Generalizations like this are wrong. There are many more .NET enterprise developers out there than there are Ruby or Node or Python developers. With quantity comes a varying degree of ability. MySpace killed themselves by lowering their standards to the easy-to-find .NET developer instead of setting the bar higher. Once you lower the standard by which you hire developers, it's a cycle: the new guys will lower the standard a little more to hire the next set, and so on.
The lesson to be learned here is if you can't find a good developer, don't blame the technology stack you've selected, blame your recruiter. Find the person you want, they are out there.
Bad products tend to die or get replaced by superior offerings. That's the nature of business.
Not being able to innovate rapidly because of technical lock-in is the only way these types of issues can "kill" a site. But it's very hard to quantify them. Between this article and Kevin Rose's statements about hiring B- and C-level programming talent, it seems like a lot of engineers are getting thrown under the bus for poor management decisions.
VB: Can you tell me a bit about what you learned in your time at Friendster?
JS: For me, it basically came down to failed execution on the technology side — we had millions of Friendster members begging us to get the site working faster so they could log in and spend hours social networking with their friends. I remember coming in to the office for months reading thousands of customer service emails telling us that if we didn’t get our site working better soon, they’d be ‘forced to join’ a new social networking site that had just launched called MySpace…the rest is history. To be fair to Friendster’s technology team at the time, they were on the forefront of many new scaling and database issues that web sites simply hadn’t had to deal with prior to Friendster. As is often the case, the early pioneer made critical mistakes that enabled later entrants to the market, MySpace, Facebook & Bebo to learn and excel. As a postscript to the story, it’s interesting to note that Kent Lindstrom (CEO of Friendster) and the rest of the team have done an outstanding job righting that ship.
But the board also lost sight of the task at hand, according to Kent Lindstrom, an early investor in Friendster and one of its first employees. As Friendster became more popular, its overwhelmed Web site became slower. Things would become so bad that a Friendster Web page took as long as 40 seconds to download. Yet, from where Mr. Lindstrom sat, technical difficulties proved too pedestrian for a board of this pedigree. The performance problems would come up, but the board devoted most of its time to talking about potential competitors and new features, such as the possibility of adding Internet phone services, or so-called voice over Internet protocol, or VoIP, to the site.
The stars would never sit back and say, ‘We really have to make this thing work,’ ” recalled Mr. Lindstrom, who is now president of Friendster. “They were talking about the next thing. Voice over Internet. Making Friendster work in different languages. Potential big advertising deals. Yet we didn’t solve the first basic problem: our site didn’t work.”
Why would you do that?
This is bread & butter algorithm analysis. Don't put O(n^2) analyses on your most-loaded page.
Others in the company begged to at least back it down to a count of people 2 degrees away; but the founder insisted that the magic was the 3-degree version. So instead the company pursued technical strategies to speed up the 3-degree count; don't know what those were precisely, but it seems that they were not pursued as zealously as they could be (due to VoIP, etc. distractions).
My understanding, by the way, is that the network size was computed on page-load until surprisingly very late, due to the perceived need for real-time. Even after it was cached, it was still computationally expensive, as your numbers were computed (roughly) every time your 3-degree network added a link.
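We don't know Friendster's actual implementation, but a naive per-page-load version of the 3-degree count would have looked something like this breadth-first sketch (the graph and names are invented; the point is that the frontier can explode combinatorially by depth 3):

```python
from collections import deque

def network_size(graph, user, max_depth=3):
    """Count members within max_depth hops of user, breadth-first.

    graph maps each member to a list of friends. Doing this on every
    page load is the cost described above: with average degree d, the
    work is roughly O(d^3) per view, per user.
    """
    seen = {user}
    frontier = deque([(user, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:
            continue                      # don't expand beyond 3 hops
        for friend in graph.get(node, []):
            if friend not in seen:
                seen.add(friend)
                frontier.append((friend, depth + 1))
    return len(seen) - 1                  # exclude the user themselves

friends = {
    "alice": ["bob"],
    "bob": ["alice", "carol"],
    "carol": ["bob", "dave"],
    "dave": ["carol", "erin"],
}
print(network_size(friends, "alice"))  # bob, carol, dave = 3 (erin is 4 hops away)
```

Backing off to `max_depth=2`, as others in the company begged for, shrinks the frontier by a full factor of the average friend count.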
In retrospect, A/B testing could have been used to test the executive vision. So although my first reaction upon hearing this story (years ago) was: "that's big-O insanity," now I think that it's just as much a story about willingness to subject vision to empirical data and performing clinical cost/benefit analysis (when appropriate).
Funnily enough, now I think about it, performance may be much more critical than periods of downtime.
MySpace's problem IMO wasn't technical at all. They built a service that focused on users most likely to move, and repelled those most likely to stick with a platform.
The comments state that there was no staging or test environment, no ability to roll back releases, and that refactoring was a dirty word (engineers wanted to refactor, but couldn't); the principal on the technical debt was never paid, only the interest, in the form of hacks to make the site scale -- again, the product organization prioritized new features.
The rest: location, technology choice aren't sufficient to "kill" a company: there are successful companies in LA, there are successful and agile companies using Microsoft stack (where appropriate-- see Loopt and StackOverflow/FogCreek as examples of companies using both FOSS and .NET). On the other hand, they're not optimal either: they aren't what engineers would choose themselves most of the time.
This indicates that the technology and location choice aren't the cause, they're symptoms of company management that doesn't understand building non-trivial Internet applications (what the optimal technology stack for one is; where, when and how to hire developers to build one) and yet maintains authority to make all technical decisions. Contrast this with Facebook, Google et al-- where every process (from corporate IT to production operations to HR and recruiting) is designed with needs of the engineering organization in mind: "Want two 24" monitors? No problem. Want Linux on your laptop? No problem. Want ssh access to production? No problem. Want to fly in a candidate to interview? No problem."
I personally wouldn't touch Windows with a sixty-foot pole, but speaking objectively, C# >3.0 and (especially) F# are great languages.
"They weren't talented": having interacted with some ex-MySpace engineers, this just isn't true -- at least not universally. Indeed, here's a secret to hiring at an early-stage startup (Facebook in 2005): seek out great developers who work for crappy companies. People who have joined "safe bet, resume brand" companies like (post-~2003) Google aren't likely to join you until you've already become a "safe bet, resume brand" company yourself (Facebook 2008-present).
> Want ssh access to production? No problem.
This makes me a little uneasy, I'm not sure everyone should have ssh access to the production server.
Every _developer_ should. No question about it. Sudo should be given on an as-needed basis, but ultimately, as a developer I can screw up a lot more by simply writing bad code.
Simple philosophy: you build the software, you should be involved in running it (including carrying a pager). Amazon's CTO agrees: http://twitter.com/#!/Werner/status/50957550908223490
This, more nuanced point, I agree with.
Ironically, if you treat production as an alien land that developers aren't allowed into (and have no transparency about), you create an environment where developers completely ignore operational concerns like security (e.g., having no authentication mechanism on their own services, because production is presumed to be a "magically secured" environment that no one can connect to in any way).
For mitigating exposure to crackers, though, it makes sense to minimize the number of possible entry points someone could compromise in order to put malicious code on your production servers. The source control system (did they really not have a source control system!?) is a less vulnerable avenue than ssh, because presumably third-parties review what flows through source control before putting it on the server.
Ssh access doesn't have to come with privileges: the main purpose of ssh access is to be able to run top, iostat, ps, strace/dtrace, grep log files, and verify that my service is configured correctly.
You are correct that code can be reviewed, but that isn't always the case, nor is the reviewer omniscient. In any case, with both code and ssh there is a strong audit trail: an employer needs to make it clear which offenses are fireable and which aren't.
For what it's worth, "give developers read-only ssh access to machines that don't contain sensitive customer data" works great for Google, Amazon (where it also comes with a pager, something I'm in favour of), LinkedIn (recently implemented-- this made my work much easier), parts of Yahoo and I'd be surprised if that isn't the case at Facebook. In other words, companies that are strongly oriented around UNIX/Linux (it's available as an option on developer desktops), which can afford to hire (and are able attract) strong developers and strong operations engineers and which are in the business of writing Internet applications.
My personal philosophy actually goes quite a bit beyond that: hire great, generalist engineers who are considerate and nice, and give them root. Let them push some code without review if they're confident it won't cause damage. Review any tricky code, bug fixes, or mission-critical components (e.g., the HA storage system, revenue-loop components, UI changes). Roll back instantly if trouble occurs (something you couldn't do at MySpace, apparently!).
If someone cracks your developer's development workstation, they can piggyback on that developer's access in order to insert malicious code into a commit, or in order to ssh into a production server and run a canned exploit of a local-root vulnerability. The first of the two leaves a strong audit trail, and may require a third party to sign off on it before going to production. The second probably doesn't, and won't.
If you can run strace on a process, you can inject malicious code into it.
While this is a theoretical consideration, I don't know of any security breaches due to this policy at the companies you list. On the other hand, there were security breaches at MySpace due to gross incompetence on the part of the developers -- most famously, the Samy worm ("Samy is my hero!").
I wasn't suggesting that developers themselves would be putting malicious code into production.
You are also forgetting that there is usually a step between a developer workstation and production, and at that gateway you'll typically have additional security measures (so that simply getting to the gateway doesn't get you to production).
I don't, however, disagree with your overall idea: yes, technically, developers having ssh access to production might (to a very small degree) reduce security, all else being equal. However, there are countless benefits to giving developers ssh access that result in greater security overall.
Nor do you have to use the same policy for all machines: SOX, for example, mandates that developers who write the code that handles financial transactions shouldn't have access to the machines that run this code (to prevent fraud). There are other types of machines I'd include in this category (databases holding sensitive user data, machines holding sensitive configuration, etc.). However, for a vanilla machine running an application server, or a database server holding strictly non-sensitive/non-revenue data, that's not the case.
There are also far worse mistakes one can make (e.g., not using version control, not putting proper review procedures in place, hiring or failing to fire incompetent developers), all of which will impact security.
MySpace just gave users way too much flexibility to modify the look and feel of their pages, so the site got way too busy and very difficult to look at.
In some respects I think it was MySpace's business proposition to let users easily create their own personal spaces on the web, whereas Facebook's goal was more to connect you to your friends. In that sense MySpace followed through, although that follow-through seemed to lead to their demise!
In short, it's wildly less-pleasant to use than Facebook.
Myspace had some of the most annoying ads on the web. Heaven forbid you tried to use the site without adblock.
I think that was the bigger problem. Had they continued to focus on improving the end-user experience rather than extracting every last bit of value, they might still be a viable competitor to facebook. The freedom to customize would be one of a very few features facebook could not easily copy.
One of the issues that stemmed from this was lack of respect for technology in the sense that no one at the higher levels saw the company as a technology company. They saw it as an entertainment or media company. That created the cultural problems on down that eventually contributed to bad products and shoddy implementation.
Now, the core technical part of the organization was actually extremely competent. MySpace was pushing more traffic than Google at one point in its heyday, and the site scaled just fine then. That wasn't an accident, I have worked with some of the smartest people in the industry there. But because tech wasn't the point for executives, those people were tightly controlled by non-technical management, and so products suffered.
MySpace could (and still can) scale anything; to say that they had scaling problems by the time they got to their peak is complete gibberish. Over the years they developed a very mature technology stack. No one knows about it because it's entirely proprietary. The problem was management and product that were basically incompetent and lacked anyone at the proper levels who cared to see and fix it.
EDIT: Some typos and missed words. I'm sure I still missed some.
BTW, I'm not normally this animated with my comments, but the article was so full of baseless conjecture I was truly appalled. I actually had a good deal of respect for Scoble prior to reading that. MySpace had a ton of problems, but it definitely had a number of great people working on technology and doing a pretty good job at it -- otherwise we would have been Friendster.
Appalling, if true. (Not that good technology and process would have made the product suck much less.)
Once you start taking away devs' ability to think for themselves, or their comfort in doing the work, it gets harder to stay motivated to come into work and fix the issues. And if management isn't listening to the complaints about the need to refactor, then what is the point? Adding hack onto hack gets boring pretty damn fast.
MySpace also lost in that they really didn't have a direction for where they were going (at least that's how it looks from the outside). Blogs, music, status updates -- what was it supposed to be? And it didn't help that their web properties didn't have a consistent look and feel, because they allowed everyone and their mother to skin their profile page however they saw fit, leaving a disjointed mess that just made me hate the site more.
It's definitely not a problem to fix their deploy issues on the .NET stack; I've put together automated deploys for Windows, and with MSI they are a breeze. Yes, it's going to take a week or two to get the hang of WiX, but after that the installer does all your dependency checks and you have a very repeatable process. If you stamp your MSIs with the build number, it's even very easy to roll back.
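The "stamp every build, keep old builds around" idea generalizes beyond MSI/WiX. Here's a minimal Python sketch of it; the `Deployer` class, directory layout, and build numbers are all hypothetical, purely to illustrate why stamped builds make rollback trivial:

```python
# Sketch: keep every deployed build on disk, keyed by build number,
# so rolling back is just re-pointing at a directory you already have.
# (Illustrative only -- a real MSI-based pipeline would invoke msiexec.)
import os
import shutil


class Deployer:
    """Keeps every deployed build under releases/; 'current' is the last one."""

    def __init__(self, root):
        self.root = root
        self.releases = os.path.join(root, "releases")
        os.makedirs(self.releases, exist_ok=True)
        self.history = []  # build numbers, oldest first

    def deploy(self, build_number, source_dir):
        dest = os.path.join(self.releases, str(build_number))
        shutil.copytree(source_dir, dest)   # stage the stamped build
        self.history.append(build_number)   # record it for rollback

    def current(self):
        return self.history[-1] if self.history else None

    def rollback(self):
        if len(self.history) < 2:
            raise RuntimeError("no previous build to roll back to")
        self.history.pop()                  # abandon the latest build
        return self.history[-1]             # now-current build number
```

The point is simply that rollback becomes "re-point at a build you already have" rather than "rebuild the old version under pressure."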
This is just about the most monumentally stupid thing you can say. If you really don't like C#, there are a dozen other languages available (like Ruby and Python). If you're hiring people who can ONLY write code in one language, that should be a sign that you're not hiring the right people to begin with. They hired crap talent that happened to know C#.
All this "which stack scales best" stuff is cargo cult programming, and you should recognize it as such. Most startups die because they have no customers, not because their servers are on fire from load.
It's too bad that all the tech built around .NET will be lost to the annals of MySpace, MSFT should acquire the company just to open source the whole thing for the benefit of .NET.
Regardless, it's fair to say starting a company on "the Microsoft Stack" today would reflect questionable judgement. Are there any recent ex-MSFT founders on it?
MSFT products are not inherently evil; they have some advantages for some types of projects. But a proprietary closed source stack always puts you at a disadvantage.
Worst case scenario with open source, you go patch what's holding you back in the open source. With bugs in MSFT products, you are at the mercy of MSFT to prioritize your issue. If you are a big enough fish, then they will pay attention. Otherwise, good luck.
I don't understand why anyone would willingly tie themselves to the Microsoft web dev stack as a startup. Even if you don't have to pay upfront, you will pay dearly in the future when you go to scale. At one startup I worked for, we were hamstrung by not being able to afford the upgrade to Enterprise SQL Server, for example. So our data replication was tedious, time-consuming, and prone to failure.
"White/yellow/green/red fonts on black backgrounds with animated GIFs, glitter, and broken plugins" will be the answer to the question "What comes to mind when you think of the MySpace UI experience?"
In comparison, the Facebook experience was a lot fresher, cleaner, and more unified.
It was downright embarrassing to have a profile page on MySpace. Unless you wanted to spend an entire weekend customizing your page, it was going to look like a banner ad factory had exploded on your profile. I'm a web professional -- I can't have that as my public image.
Not only did MySpace look like an amateur web site from 1998, it was completely confusing to operate. What checkbox do I click on which page to turn off the flashing purple?
MySpace just had an inferior product, plain and simple.
I think the article has the right notion: the stack doesn't matter; a team of highly motivated devs who can milk the technology involved matters more.
The web tier has very little to do with scalability (don't get me wrong, it has a lot to do with cost, just not scalability, except in subtler ways like database connection pooling) -- it's all about the data. When MySpace hit its exponential growth curve, there were few solutions, OSS or otherwise, for scaling a Web 2.0-style company (heavy reads, heavy writes, a large amount of hot data exceeding the memory of commodity caching hardware, which was 32-bit at the time, with extraordinarily expensive memory). No Hadoop, no Redis; memcached was just getting released and had extant issues. It's funny because today people ask me, "Why didn't you use Technology X?" and I answer, "Well, it hadn't been conceived of then :)".
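For readers who haven't fought this particular battle: the "hot data exceeding cache memory" problem above is cache-miss pressure. A toy read-through LRU cache makes the dynamic visible -- this is an illustrative sketch, not MySpace's or memcached's actual implementation:

```python
# Toy read-through LRU cache: once the hot set exceeds capacity,
# evictions cause misses, and every miss falls through to the database.
from collections import OrderedDict


class ReadThroughCache:
    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.store = backing_store          # e.g., a database lookup function
        self.cache = OrderedDict()
        self.misses = 0

    def get(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)     # mark as most recently used
            return self.cache[key]
        self.misses += 1                    # fall through to the backing store
        value = self.store(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return value
```

With a 32-bit box capping each cache at a few GB, a hot set larger than capacity means this miss path runs constantly, which is exactly the pressure described above.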
At the time, the only places that had grown to that scale were places like Yahoo, Google, eBay, Amazon, etc., and because they were on proprietary stacks, we read as many white papers as we could and went to as many get-togethers as we could to glean information. In the end, we wrote a distributed data tier, messaging system, etc. that handled a huge amount of load across multiple data centers. We partitioned the databases and wrote an ETL tier to ship data from point A to point B and target the indices to the required workload. All of this was done under a massive load of hundreds of thousands of hits per second, most of which required access to many-to-many data structures. Many startups we worked with, Silicon Valley or not, could not imagine scaling their stuff to that load -- many vendors of data systems required many patches to their stuff before we could use it (if at all).
Times have changed -- imagining scaling to MySpace's initial load is much easier now (almost pat). A key-partitioned database tier, distributed asynchronous queues, big 64-bit servers for chat sessions, etc. But then you factor in that the system never goes offline -- you need constant 24-hour availability. When the whole system goes down, you lose a huge amount of money, as your database cache is gone, your middle-tier cache is gone, etc. That's where the operations story comes in, wherein I could devote another bunch of paragraphs to the systems for monitoring, debugging, and imaging servers.
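The key-partitioned database tier mentioned above can be sketched in a few lines: hash each user id to a fixed shard so every read and write for that user lands on one database server. This is a generic illustration, not MySpace's actual routing scheme; the md5-based hash and the shard counts are assumptions:

```python
# Generic sketch of key-based partitioning (not MySpace's actual scheme).
# A user id maps deterministically to one shard, so all of that user's
# reads and writes hit the same database server.
import hashlib


def shard_for(user_id, num_shards):
    """Route a user id to a shard with a stable hash (not Python's built-in
    hash(), which is randomized per process)."""
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return int(digest, 16) % num_shards


class ShardedStore:
    """In-memory stand-in for a farm of partitioned database servers."""

    def __init__(self, num_shards=16):
        self.shards = [dict() for _ in range(num_shards)]

    def put(self, user_id, key, value):
        self.shards[shard_for(user_id, len(self.shards))][(user_id, key)] = value

    def get(self, user_id, key):
        return self.shards[shard_for(user_id, len(self.shards))].get((user_id, key))
```

The hard parts in production are everything this sketch omits: resharding when you add servers, and the many-to-many queries that span shards -- exactly the load described above.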
Of course there's the data story and the web code story. MySpace was an extraordinarily difficult platform to evolve on the web side. Part of that was a fragmentation of the user experience across the site, and a huge part of that was user-provided HTML. It was very difficult to do things without breaking people's experiences in subtle or not-so-subtle ways. A lot of profile themes had images laid on top of images, with CSS that read "table table table table...". Try changing the experience when you have to deal with millions of HTML variations. In that respect, we dug our own grave when it came to flexibility :).
Don't get me wrong, there were more flaws in the system than I can count. There was always something to do. But as someone who enjoys spending time on both the Microsoft and OSS stacks, I can tell you it wasn't MS tech that was the problem, nor was it a lack of engineering talent. I am amazed and humbled by the quality of the people I worked alongside to build out those systems.
Article being discussed: http://highscalability.com/blog/2011/3/25/did-the-microsoft-...
The HackerNews thread this comment is from: http://news.ycombinator.com/item?id=2369343
I have no idea why HackerNews has no context links built-in to their comment pages. O_o
Let the big karma fire begin.
edit: somewhere else someone mentioned they used ColdFusion. I consider that another stupid decision. But at least they were migrating off it.
My colleagues at Stack Overflow work faster and produce more -- at obsessively fast web scale -- than any team I have observed. I also see talented people struggle to produce a viable site using (say) Ruby on Rails.
Technology correlation? None. The correlation is in discipline, understanding the tools, foresight, priorities, management...
Think of it this way: how often have you seen a headline on HN bring a site to its knees? Fair guess that many of them are on "scalable" technologies.
Don't blame technology for your failings. Facebook won because it had a first name and second name field.
Hearing a blogger who has no idea what he's talking about make generalizations like 1) there are no good C# developers and 2) there are no good developers outside the Bay Area shouldn't bother me, but it does.
In their defence, what Facebook stumbled upon was really simple and yet very non-obvious (at least initially).
One key aspect of Myspace is how customizable it is. As any programmer can tell you, this limits the ways features can be rolled out.
For example, you want to have a new layout? Too bad. It will break the users customization.
You want to add a new button? Too bad. There is not a coherent place where you can add it.
You want Ajax? How will that break users' layouts?
What killed MySpace was poor management. It is one of those companies that still don't get that good engineers are as precious as good lawyers.
- step 2: blame the developers' competence to hide your own incompetence
- step 3: fail
The MS stack does not kill anyone; dumb management kills.
The top level should be able to see the error and move on it, be it dumb layoffs or the .NET codebase. It's not like MySpace was rocket science.