What happened to those concept cars that were encased in rubber (or some other bouncy material), so that when they bumped into each other they would harmlessly bounce off? I distinctly remember reading about them in Wired some years back.
Any amount of rubber would only work up to a certain speed. And modern cars are already designed to protect passengers up to about 30mph - crumple zones, multiple airbags, an engine block that slides underneath the body of the vehicle, early collision warning, automatic braking and so on. Encasing cars in rubber would not improve anything above 30mph, while I am fairly certain it would increase the weight of the vehicle - making it consume more fuel, and heavier vehicles also take longer to stop, so the net benefit might actually be nil.
Yeah, Wired runs lots of stories about stuff that quite simply never happens.
I used to keep a lot of back issues of Wired, which I saved because I thought they had some cool stories. Surprisingly, many are packed with examples of very obvious vaporware, and have aged really poorly.
A small handful, however, still read like notes from an exciting future, and I find myself returning to them when searching for inspiration.
Wow, what a crazy thread! They are going about picking nutrients the way I would go about picking a web framework. Hey let's pick Rails what dynamic languages no way ok how about Servlets no way dude no semicolons for me ok then how about Play dude it's like so boilerplatey ok fine how about Scalatra fine Scalatra it is. There's no thought that some human is actually going to put this thing in his mouth, or what effects it's going to have on his health and well-being. Nutrition as a Service.
I think debate is going to be present in the creation of anything, even amongst those with experience, so that part doesn't bother me too much.
However, to me Soylent feels as if a bunch of business experts with no software experience sat down to have a detailed debate with an open internet forum about what design decisions should be made in the construction of an electronic banking system or FEA modeling software, the whole way using only Wikipedia as a resource and without consulting anyone with past experience building such a system.
Actually, Soylent's development process reminds me a bit of many Bitcoin products: overconfident hype-prone attempts to design high-value systems with limited domain knowledge and a certain disdain for regulation, authority, and expertise.
If I weren't opposed to giving them money, I certainly wouldn't be opposed to trying Soylent and perhaps using it as a meal-replacement shake at times. After all, the stuff is almost certainly more nutritionally valuable than a lot of the things I eat. But for a complete meal replacement, I feel things need to be pretty damn right, and I don't think Soylent has the domain experience or testing to market their product as such.
>Were you expecting a magical revelation from the gods of nutrition?
Unlike subjective, ad-hoc garbage like web/JS frameworks, nutrition is a sound science, well studied for hundreds of years. We know which people in which climes eat which food groups in what proportion, how they fare relative to each other, the impact of food groups on your bile/blood/bowels, ...
There are, as you choose to call them, "gods of nutrition", with PhDs and years of experience studying the damn thing in their labs. To randomly pick brown rice over whey because "hey, I don't want to support industrialized cattle farmers"... that would be like my toddler picking candy over broccoli because it just tastes better! This is supposed to be an MRP that real people eat, and hopefully has no adverse effects on them. It's not some random server on port 8080 that serves JSON whether you code it up with or without semicolons.
Wow, I had no idea it's that bad. I have never worked in India, but my Indian friends describe a similar environment in the Indian public sector - jobs for life, generally no job-hopping, no worries about money, company homes, company schools, company hospitals, servants provided for - essentially just punch the clock, pretend to do some work & go home.
I wonder how the salaryman culture gels with the Japanese being so innovative. In Rising Sun (both the bad movie & the worse book of the same name), Michael Crichton makes the claim that the Japanese are light years ahead of the US vis-à-vis computing/tech/optics/physics. There's a scene where Sean Connery walks into a physics lecture hall at UCLA & points out that all the students are Asian while Americans are missing, because they avoid hard subjects. If they are all boring, homogeneous salarymen, what's the impetus for being so innovative? Any innovation by definition rocks the boat, so why do that?
"I wonder how the salaryman culture gels with the Japanese being so innovative."
But that was answered above:
"You presided over a corporate bankruptcy? Oh dear. That's even worse than a personal bankruptcy. Not only did you fail personally, you ruined the lives of people to whom you owed society's highest form of loyalty."
They are highly motivated to engage in enough innovation so as to stave off bankruptcy.
As to Japanese companies leading in innovation and successful business/manufacturing/etc practices, I will defer to those who are more familiar, but I believe a big part of it is rigorous frameworks that look at the situation, take a long (LONG) term view, and methodically remove failure points and find ways to reliably achieve success.
Patrick has discussed this a number of times, here is a representative quote: "Keep doing this whole "Learn from failure and overcome it" thing for a few decades and you end up the biggest car company in the world and make the competition look like sniveling amateurs."
If you want to learn more, look up "Kanban," "Kaizen" (continuous improvement), "5 Whys," or search HN for "patio11 toyota."
The notion of area has confused hordes of scientists; it is time to retire it from common use and replace it with the more effective one of circumference. Area should be left to mathematicians, topologists and developers selling real estate. There is no scientific reason to use it in statistical investigations in the age of the computer, as it does more harm than good.
Say someone just asked you to measure the area of a circle with radius pi. The area is exactly 31. But how do you do it?
Do you pack the circle with n people, count them up and verify n == 31? Or do you pour a red liquid into the circle until it is full, then drain it and measure the amount of red? For there are serious differences between the two methods.
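Either way, a quick sanity check in the same Scala REPL spirit as the circumference computation - a sketch using nothing beyond the standard library, since the area of a circle of radius pi is pi * r^2 = pi cubed:

```scala
// Area of a circle of radius pi: pi * r^2 = pi^3 ≈ 31.006
val area = math.round(math.Pi * math.Pi * math.Pi).toInt
println(area)  // prints 31
```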
If instead you were asked to measure the circumference of a circle with radius pi:
scala> math.round(2 * math.Pi * math.Pi).toInt
res2: Int = 20
You just ask an able-bodied man, perhaps an unemployed migrant, to walk around this circle while another man, an upstanding Stanford sophomore, starts walking from Stanford to meet his maker - I mean his VC - well, it's the same thing...
So by the time the migrant finishes walking around the circle, our upstanding Stanford entrepreneur is greeting the VC on the tarmac of San Francisco International Airport. This leads one to rightfully believe that the circumference of the circle of radius pi is exactly the distance from Stanford to the SF Airport, i.e. 20 miles. It corresponds to "real life" much better than the first - and to reality. In fact, whenever people make decisions after being supplied with the area, they act as if it were the distance from their university to the airport.
It is all due to a historical accident: in 250BC, the Greek mathematician Archimedes introduced Prop 2, the Prevention of Farm Cruelty Act (http://en.wikipedia.org/wiki/California_Proposition_2_(2008)). No, I believe this was a different Prop 2. This Prop 2 states that the area of a circle is to the square on its diameter as 11 to 14 (http://en.wikipedia.org/wiki/Measurement_of_a_Circle). The confusion started then: people thought it meant areas had to do with being cruel to farm animals. But it is not just journalists who fall for the mistake: I recall seeing official documents from the department of data scientists, which found that a high number of data scientists (many with PhDs) also get confused in real life.
It all comes from bad terminology for something non-intuitive. Despite this confusion, Archimedes persisted in the folly by drawing circles in the sand, an infantile persuasion, surely. When the Romans waged war, Archimedes was still computing the area of the circle. The Roman soldier asked him to step outside, but Archimedes exclaimed "Do not disturb my circles!" (http://en.wikipedia.org/wiki/Noli_turbare_circulos_meos)
He was rightfully executed by the soldier for this grievous offense. It is sad that such a minor mathematician can lead to so much confusion: our scientific tools are way too far ahead of our casual intuitions, which starts to be a problem with a mad Greek. So I close with a statement by famed rapper Sir Joey Bada$$, extolling the virtues of the circumference: "So I keep my circumference of deep fried friends like dumplings, But fuck that nigga we munching, we hungry." (http://rapgenius.com/1931938/Joey-bada-hilary-swank/So-i-kee...)
Except the difference here is that generally Taleb is right and has a point. You know, the difference between the probabilistic Hacker News titles generator and the actual titles found here is that, even though they sound the same and carry much the same style... one is actually real.
Exactly this. If you are a PM at a Valley outfit & can't code right now, your team simply pretends to respect you while going off & doing its own thing. It's just one of those empirical things. Doesn't matter if you "used to code" once upon a time. Things are moving at a breakneck pace & if you don't keep up, you are really going to get left behind.
I will also say that if you are a PM at a non-Valley outfit, like a bank or one of those outsourcing firms, the exact opposite holds true. You should probably not code & should spend all of your time managing. IBs especially frown on MDs & VPs taking on hands-on roles instead of managing & filling out Gantt charts. My manager at Goldman used to write compilers using JavaCC to automate risk algebras - he regularly got the stink eye. He used to tell me "they just want me to run the spreadsheets" :)
A manager is responsible for controlling, administering and maintaining the work being done by their subordinates. A big chunk of that is determining what work is yet to be done, if it's on time, what needs to be done to support the existing work, communicating with other teams, doing research, quashing harmful discourse and building morale. They organize, prioritize and double-check the tasks assigned and meet with their subordinates to ensure everything is going smoothly. They plan, develop, monitor, communicate, and assess their employees and their work. And of course they attend countless, constant meetings.
If as a manager you can do all that and then have 2.5-3.5 hours a day left to write code, bravo.
Do you have a citation for this? I've had some stretches in my career where I went a long time without doing any significant coding, and have always been able to ramp back up to speed quickly when needed.
Both my parents used to code (my dad on punchcards and my mom on a massive mainframe). It took significant time for either of them to pick up HTML for side projects.
Even in my own life, I was a developer from 10 to 18 (working 14 to 18, school 10-14) then I took about 7 years off to go do physical sciences. While I retained the basics like for loops and function calls, I totally lost a good portion of the rest of it.
At least in your case, you not only stopped coding, you stopped being around code or dealing with code at all. That is very different from coding only 30% of the time. Chances are good that if you coded, say, 5-10% of the week and spent the remaining time thinking about code architecture or doing code reviews, you'd be in a much different position.
No, but I've been through 8 years of religious education and can hold an intelligent conversation about why evolution and the Big Bang are the same conclusions I would reach as a scientist, so I get a bit miffed when people confuse my faith for irrationality.
I can't help but notice that the vast majority of commenters are missing the fundamental reason for all this handwringing. The reason isn't language schism (that too, but I'd attach a much smaller weight to it) so much as the schism between corporatism & academia.
Corporate America wants languages that are dumb, easy for corporate drones to assimilate, hard to mess up, and verbose (verbosity is misinterpreted as documentation in the enterprise) - and Java fits the bill from the get-go.
So Java became that. Rather than go down the R&D route, the pipeline became oriented toward corporate America.
Haskell has always had very deep roots in academia - the focus on enabling the corporate American drone with Haskell is minimal, close to zero. So what do you expect? The focus will be on debating types & building theorem provers, not on "how to show big bank's fourth quarter report in 3 colors with 7 columns and subtotals in a single widget while data-binding seamlessly occurs in the middle tier using enterprise beans built from beanfactoryfactory" - this sort of shit, commonplace in the Java EE world I inhabited post-Sun, would be considered positively gauche in Haskell.
Totally! Dijkstra was very aware of this influence and pokes at it only somewhat subtly here. Dijkstra believed very strongly in training the next generation of well-armed, intelligent computer scientists - scientists tackling the hardest problems humans have yet discovered, by his view. His goals are vastly misaligned with what corporate America wants, and the choice of language is a tool in the war between those interests.
Exactly. Tooling is extremely political. Do you want to use tools with a low skill ceiling? Are you a control freak who worries about all of the potential catastrophes that unskilled coworkers could rain down on you?
If you think that kind of work is retarded, then don't get fooled by the languages and tools. A lot of the web stuff is just the same as VB. Just because it's Ruby or Clojure doesn't take away from the fact that it's just CRUD. Try to get into an interesting domain and don't worry too much about the language they use. Domain, domain, domain.
Don't get too discouraged by its corporate cooties. It's true that the language doesn't have a lot of sex appeal (no dialect of BASIC ever does), but it actually has a lot of cool stuff that's worth exploring. All told, you're actually pretty fortunate: it's one of the few languages you might see in a high school curriculum that has decent support for interesting things like functional and asynchronous programming. All in all, VB.NET can take you a lot further than you might expect from its reputation among people who don't know it very well or are still sore about losing VB6.
For whatever reason, POMDPs have been co-opted by the AI community, so the books are pretty lousy and show-offy, often using nonstandard terms and newfangled jargon to explain what is really plain old probability theory with some state-space transitions. Thrun's Probabilistic Robotics is the best one I could find. Any math professor could run circles around the material if it were treated as part of math instead of bringing AI into the mix.
And to be fair, so far they've been right: no robotic AI remotely similar to this scenario is visible even as a distant blip on the horizon.
Their point is entirely fair that robots would have to deal with a continuously ambiguous world and would lack anything like this fable's "general good purpose" module for resolving ambiguity in a touchy-feely way. Of course, the complexity of human interaction wouldn't appear suddenly in a moment of interaction with one drug addict, but it would hit and crush any "real world" AI the moment it tried to get out the door.
A real-world AI would almost certainly have to learn the rules of its environment rather than being hard-coded with arbitrary human-designed rules. Machine learning is getting better and better at this.
If we ever got them as intelligent as the robots in this story, we'd have no way of programming them with abstract high-level goals (e.g. "do good", or "don't hurt humans") except by giving them examples of robots hurting people and robots not hurting people, and hoping they infer the pattern we want.
This is an (extremely) simplified argument for the dangers of AI.
The only "real world AI" example we have is us, human beings ourselves.
Humans manage to be able to both learn from their environment and to learn by being told rules - a person would have a hard time demonstrating intelligence if they weren't able to be instructed in things and so it seems like anything intelligent we construct would have to have those abilities too.
I suppose it's a natural overreaction for people to believe that if intelligence is not just rule-following, it must not be rule-based at all. I believe the truth is somewhere in the middle.
An intelligence that smart would likely understand what you are saying and what you want. That doesn't mean the AI would want to do what you tell it to do though.
Comparing it to humans, if you tell a human you want them to do something it doesn't mean they will do it even though they understand you.
If we train the AI the same way we do today, it would involve giving it examples of robots doing what they are told and robots failing to do that. That approach would likely fail because of all the possible ambiguities involved in interpreting meaning.
Other approaches, like giving a robot a reward every time it does something right and a punishment every time it does something wrong, might result in the robot killing its master and stealing its reward/punishment button.
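A toy sketch of that failure mode, with entirely made-up reward numbers: a purely reward-maximizing agent compares the expected payoff of doing its chores with the expected payoff of seizing the button, and the arithmetic does the rest.

```scala
// Hypothetical expected rewards for a naive reward-maximizing agent.
val expectedReward = Map(
  "do the chore"     -> 1.0,   // one reward per task completed
  "seize the button" -> 100.0  // press the reward button at will
)

// A greedy agent simply picks the action with the highest expected reward.
val choice = expectedReward.maxBy(_._2)._1
println(choice)  // prints: seize the button
```

Nothing in the agent's objective distinguishes "earn the reward" from "take control of the reward channel"; that distinction lives only in the designer's head.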
Back when I was a really dumb undergrad, I did some work for a company & they asked me where to mail the cheque. I gave them the university address. So I was chatting with my professor one day & we walked by the mailboxes, and by pure reflex I reached into my mailbox & grabbed my mail, he did the same, & we said our goodbyes & went home.
Now, I open the mail at home & am staring at my professor's salary! You see, the secretary had switched our mail by mistake because our last names began with the same letter. I was quite stunned by the number - it was a measly sum; I did the math & worked out that the professor's salary was about 60K. Now, I knew my professor was an important CS scholar & had tons of papers to his name, but that low number irked me. After the PhD and all these papers, just 60K... why?
At the same time, my Professor had also gone home & opened his mail & was staring at my salary! So much money for some dumb undergrad who was basically an average student & had no major publications or research! He was quite bothered.
The next day, we had a very awkward exchange of mail. But from then on, the student-teacher dynamic completely changed. I suddenly began getting B's instead of C's & even the occasional A-. He probably felt, hey, if this guy can get so much money in the market, he probably knows his shit. OTOH, I began to respect him & the CS program less & less. So: I still have to spend 3 more years, take 45 more credits & do the qualifiers to get the PhD, & then write all these papers & for what... 60K? That was my attitude at the time.
Needless to say, I dropped out of the PhD program with a Masters & went to work full-time. That was the stupidest thing I ever did, but I just didn't know it then. Now I look back & think... hey, if I hadn't known about his salary, I'd have slogged through & actually gotten my PhD instead of half-assing it out here :(