
I have been skeptical after a lot of Microsoft misses, but the Surface Book might just put Microsoft on the high road. Splitting up the hardware breaks new ground. If I understood it correctly, the GPU is in the keyboard, which you can attach to get more power. In detached mode the screen itself has an i7 processor that's plenty powerful. So they managed to let you hot-plug the GPU while the OS is running?

> I have been skeptical after a lot of Microsoft misses

The Surface 3 and Surface Pro 3 weren't misses. They were extremely well received and fairly successful products.

Indeed, nobody makes knock-offs of unsuccessful products....

> So they managed to let you hot-plug the GPU while the OS is running?

Yes. And I think they said that with DirectX 12 it'll split the load between GPUs when both are connected.

I'm curious how this works in practice. I have a laptop with AMD integrated + dedicated graphics, and AMD's dual-graphics implementation resulted in herky-jerky framerates (it seemed to alternate between the GPUs, so frames rendered by the integrated one took slightly longer than those from the dedicated one). I ended up telling the driver not to do that and just use the dedicated GPU (splitting the work didn't produce observable improvements anyway).

It's a DirectX 12 feature that software developers have to opt into, and it is up to the developer to figure out how to spread the workload across the GPUs, but they can now program against any and every GPU on a device regardless of manufacturer. (The fun thing here will be seeing software handle hot drops as GPUs get detached/reattached with the Surface Book.)

Only if the application supports it.

> Splitting up the hardware breaks new ground.

I'm not saying you're incorrect (I think the Surface Book indeed appears innovative and useful); I just wanted to share the history of at least one product class -- Panda Project's Archistrat workstations and servers -- that separated CPU, memory, and I/O into allegedly independent and upgradable subsystems connected by a passive backplane.

BYTE wrote about it, and the Internet Archive luckily saved the interesting preview:


As I searched for this article, I happened across Panda Project's patent for their backplane, published almost 17 years ago to the day:


It's probably just coincidence (I'm guessing MS wasn't waiting for this patent to expire before releasing the Surface Book), but it was a fun find.

Edit: "Pics or it didn't happen", courtesy Infoworld via Google Books:


> So they managed to let you hot-plug the GPU while the OS is running?

Is that news? I think I've seen similar setups with ExpressCard-based PCIe adapters for desktop graphics cards for a few years now. Thunderbolt has also had hot-plug PCIe connections from the beginning. (Less elegant, of course, but the same engineering challenge.)

The seamless part is new, very new. Normally, once a GPU is initialized it's locked in; the fact that they can turn it on and off without a full reboot is quite amazing.

If you run a hypervisor setup with multiple GPUs, you need to make sure that the UEFI and the hypervisor/main OS do not initialize them until they are passed through to the guest VM. Even after that, you can't recycle them easily and usually need to reboot the host if you want to pass them through to another guest.

Is that any different from how Windows has restarted the GPU and reloaded the driver whenever the video driver crashes, ever since Windows 7? I think the new part might just be userland software being able to take advantage of it.

Even that's not new. I had a circa-2010 HP Envy laptop that would switch from discrete to integrated graphics when you unplugged it, and it had a tray icon to manually switch between them without rebooting.


It's not the same as disconnecting the GPU; it would go into low-power mode but not disconnect the GPU completely.

You have had that feature for quite a while, but it's not the same as reinitializing the GPU completely from scratch.

AWS Lambda is great for inconsistent, atomic workloads. However, I had a fairly disappointing experience with Lambda when I tested it just last week.

For example, you cannot send dynamic response headers using the AWS API Gateway (the complementary service used to expose HTTP endpoints). In my case I wanted to change the MIME type depending on whether the response was JSON or JSONP.
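
To make the JSONP case concrete, here is a minimal Node.js sketch of the kind of handler I had in mind (2015-era Lambda handler shape; the `callback` event field is my own and would have to be mapped in from the request). The body can branch per request, but the Content-Type has to be fixed in the API Gateway integration response rather than set here:

  // Minimal Lambda handler: the body can vary per request,
  // but the Content-Type header cannot be set from here.
  exports.handler = function (event, context) {
    var payload = JSON.stringify({ ok: true });
    if (event.callback) {
      // JSONP: should ideally be served as application/javascript
      context.succeed(event.callback + '(' + payload + ')');
    } else {
      // Plain JSON: should ideally be served as application/json
      context.succeed(payload);
    }
  };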

It's also not possible to connect Lambda directly to ElastiCache; mostly you are expected to work with S3 or DynamoDB (Amazon's proprietary JSON store, and what was mostly responsible for the recent data outage in US East). ElastiCache would allow easy persistence, which is why it's surprising it can't be connected to, given that it's an AWS service (you can reach it by creating an EC2 proxy, but that would defeat the purpose of a serverless architecture).

Some other oddities: having to sniff the response body to set HTTP headers (as opposed to just allowing your Lambda function to set the headers directly), and parsing the JSON response as opposed to doing a regex match.

I've been playing around with API Gateway & Lambda a bit lately, and it definitely feels like these services are sometimes built by teams that don't talk to each other.

API Gateway tries really hard to HIDE things from you. For instance, you can't see what the requested URL was without using a fair bit of VTL to put it back together from other variables. And only lately can you get a full list of query parameters without having to specify them at the time of API creation. In fact, it seems like most of the work on API Gateway since its release has been to let end users access data it hid in the first place.

I don't know exactly when I gleaned it, but it was definitely a culmination of my time at Microsoft: building a platform is one of those painfully obvious things that everyone should be doing. I wrote a rant about it in 2010 [1], thinking I was just putting out the obvious for laypersons.

WordPress has some really horrible backward-compatibility baggage, but the largest reason for its success is that it's a platform. Being a platform is as simple as having an API that lets others build on top. The platform API is like a bag of seeds that lets everyone do their own gardening using your seeds. This builds an ecosystem.
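
To illustrate, here is a toy sketch (entirely my own, not WordPress code) of a hook-style plugin API: the platform's only job is to expose a couple of registration functions, and plugin authors build everything else on top:

  // Toy plugin system: the platform exposes hooks, plugins attach to them.
  var hooks = {};

  function addAction(name, fn) {
    (hooks[name] = hooks[name] || []).push(fn);
  }

  function doAction(name, payload) {
    (hooks[name] || []).forEach(function (fn) { fn(payload); });
  }

  // A third-party plugin needs nothing but the public API:
  addAction('post_published', function (post) {
    console.log('Sharing:', post.title);
  });

  doAction('post_published', { title: 'Hello, platform' });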

Accessibility, when it comes to platform APIs, is also best illustrated with a simple example, and should be clearly obvious to anyone who used jQuery, Prototype and their ilk. At the time, jQuery offered method chaining, which other JS frameworks did not. And it also offered some really simple and well-named convenience methods.

...returning 'this' in functions to allow method chaining was a light bulb moment for me after I saw how well jQuery did it. The jQuery plugin ecosystem stands on its own merit.
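
For anyone who hasn't seen the pattern, here is a minimal sketch (my own, not jQuery source): each method does its work and then returns `this`, and that is all chaining is:

  function Query(elements) {
    this.elements = elements;
  }

  Query.prototype.addClass = function (name) {
    this.elements.forEach(function (el) { el.classList.add(name); });
    return this; // returning `this` is what enables chaining
  };

  Query.prototype.hide = function () {
    this.elements.forEach(function (el) { el.style.display = 'none'; });
    return this;
  };

  // Usage: chain as many calls as you like.
  // new Query([].slice.call(document.querySelectorAll('.item'))).addClass('done').hide();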

The great thing about building a developer platform is that it leads to an ecosystem of apps and plugins. And the great thing about an ecosystem is that it gets you venerable developer talent on a truly global scale.

I recall a discussion many years back about why Microsoft enterprise licenses were mostly available only through certified partners. It struck me at the time that Microsoft was handing out 15%+ commissions for no reason when it could have adopted Apple's direct-selling model. It turns out Microsoft had essentially built a retail platform where being a Gold partner unlocked top margins. You could find Microsoft certified partners everywhere. And this built an ecosystem. These certified partners were cross-selling and up-selling Microsoft products to no end, layering their own support on top and pushing for 3-year release cycles so the stream of money would get renewed vigour.

[1]: http://aleembawany.com/2010/08/23/platforms-strategy/


It's a neat idea; however, I think this sort of effort can be more effective if open-sourced. In fact, it's probably better you open-source it before someone else comes along and does it. To clarify, I am not talking about the UX, only the lists themselves.

If you do go this route, my 2 cents are to keep it JSON-formatted and maybe add a severity flag to each word: words like "git" aren't so bad if your product is targeting non-English audiences, while words like "fuck" are really bad irrespective of the audience you are targeting. Taken in conjunction with the population size of the language, this could generate a good score for word safety.
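
A rough sketch of what an entry could look like (the field names are just a suggestion):

  [
    { "word": "git",  "language": "en", "severity": 1 },
    { "word": "fuck", "language": "en", "severity": 5 }
  ]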


It's wonderful that CSS problems have been distilled so clearly; it's been a long time coming. Radium [1] is really worth checking out; it's simple and clear.

From the article:

  /* BEM */
  .normal { /* all styles for Normal */ }
  .button--disabled { /* overrides for Disabled */ }
  .button--error { /* overrides for Error */ }
  .button--in-progress { /* overrides for In Progress */ }

  /* CSS Modules */
  .normal { /* all styles for Normal */ }
  .disabled { /* all styles for Disabled */ }
  .error { /* all styles for Error */ }
  .inProgress { /* all styles for In Progress */ }

> In CSS Modules each class should have all the styles needed for that variant

This seems like trading one set of problems for another. It violates DRY and consequently impacts code maintainability. The corollary is that .disabled should @extend .normal and then define the overrides, so you only have to define the base styles once in .normal... but that also requires discipline and code wrangling.
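
Something along these lines, in Sass (my sketch of that corollary, not code from the article):

  .normal { /* all base styles, defined once */ }
  .disabled {
    @extend .normal; /* pull in the base styles */
    opacity: 0.5;    /* then define only the overrides */
  }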

> [BEM] requires an awful lot of cognitive effort around naming discipline.

But the proposed alternative is not too different either:

  /* components/submit-button.css */
  .error { ... }
  .inProgress { ... }

This example just replaces BEM naming and namespacing with file naming and directory structure.

[1]: https://github.com/FormidableLabs/radium

EDIT: On second reading, the stuff in there is definitely not "Welcome to the future" exploring the "adjacent possible" realm -- those are quite grandiose pretexts for a solution that is just as unwieldy.


Regarding your first point, he addresses that in the next section:

"The composes keyword says that .normal includes all the styles from .common, much like the @extends keyword in Sass. But while Sass rewrites your CSS selectors to make that happen, CSS Modules changes which classes are exported to JavaScript."

And isn't file (module) naming and directory structure what we use to organize regular code? I like the idea of going to the components folder and looking for the 'submit button' file when I need to alter something much more than searching a 20,000-line CSS file or randomly split SCSS files. Even nicer is the concept of including your styles in the same directory as the components they define, further mirroring normal code organization. It seems like this would allow for much easier onboarding onto a project.


I am not saying it's a bad idea. However, it trades one set of problems for another. What point is there in highlighting a problem and then proposing a remedy which introduces the same problems in a different form?

> I like the idea of going to the components folder and looking for the 'submit button'

All existing frameworks use the same approach (Bootstrap, Foundation, et al.). GetMDL.io, for example, uses BEM and clean files to organize code: https://github.com/google/material-design-lite/tree/master/s...

Dealing with that on any reasonably sized project still requires "naming discipline" and "cognitive effort", both of which the author highlights as problems ailing BEM, with a remedy that still suffers from the same. It didn't read very convincingly.


> The corollary is that .disabled should @include .normal and then define the overrides so you only have to define the base styles once in .normal... but that also requires discipline and code wrangling.

If I'm using .btn-disabled everywhere, then it's hard to write a CSS rule that says 'whenever a button follows another button, apply this style to it'. That's why we have this competition between BEM and `.btn .disabled`.
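
A quick sketch of what I mean (my own example, not from the article): with a shared base class the sibling rule is one line, while flat variant classes force you to enumerate every combination:

  /* Easy with a shared base class: */
  .btn + .btn { margin-left: 0.5em; }

  /* Awkward with flat per-variant classes: */
  .btn-normal + .btn-normal,
  .btn-normal + .btn-disabled,
  .btn-disabled + .btn-normal,
  .btn-disabled + .btn-disabled { margin-left: 0.5em; }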

> This example just replaces BEM naming and namespacing with file naming and directory structure.

...Yes. That's the point.

The problem is that CSS is a global 'language', and everything that you write has the chance to impact all your other CSS. When I'm writing a particular component, there's no way to write CSS that's local to just that component.

We have things like BEM that are about establishing a _convention_ for writing pseudo-local CSS, but nothing is enforced. Everything is still global. CSS Modules enforces that your CSS is 'local only' at the file level, just like regular (Node) JS.

It's like how there's a convention of not adding a string to an integer, but adding a type system to your programming language enforces that on a technical, contractual level.
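
A toy illustration of that file-level scoping (the generated class name below is made up, but this is the mechanism CSS Modules uses):

  /* button.css */
  .error { color: red; }

  // button.js
  import styles from './button.css';
  styles.error; // e.g. "button__error___2xk9a" -- unique per file, so no global clashes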


Except as far as I can tell, with Radium you get zero support for pseudo-classes and media queries until the client-side JavaScript has downloaded and been initialised. This is acceptable if it's purely a client-side app, but if you're using any kind of server-side rendering it's clearly not good enough.

I really don't want a lack of hover or focus effects on initial page load, and I really don't want my entire page layout to change once the JavaScript-powered media queries kick in.


Radium is simple, and for that I like it. Complexity comes with a price, and there may be alternatives, but Radium leverages JS expressions, which I like. Quoting from the article again:

  /* components/submit-button.jsx */
  import { Component } from 'react';
  import styles from './submit-button.css';
  export default class SubmitButton extends Component {
    render() {
      let className, text = "Submit"
      if (this.props.store.submissionInProgress) {
        className = styles.inProgress
        text = "Processing..."
      } else if (this.props.store.errorOccurred) {
        className = styles.error
      } else if (!this.props.form.valid) {
        className = styles.disabled
      } else {
        className = styles.normal
      }
      return <button className={className}>{text}</button>
    }
  }

That looks unwieldy -- 4 if/else clauses just to deal with CSS. Contrast that with the example in "Usage" over at the Radium page https://github.com/FormidableLabs/radium -- much simpler.


If we were to mirror the approach of the Radium example, we'd actually end up with something more like this:

  import React, { Component } from 'react';
  import styles from './SubmitButton.css';

  export default class SubmitButton extends Component {
    static defaultProps = {
      kind: 'default'
    };

    render() {
      const { kind } = this.props;
      const text = kind === 'in-progress' ? 'Processing...' : 'Submit';
      return <button className={styles[kind]}>{text}</button>
    }
  }


A lot of the latest thinking in (non-JS) CSS suggests that violating DRY is an excellent idea. Copying and pasting CSS rules is cheap, but maintaining and adapting a large, complex CSS project that's blowing up in a team's face can be very expensive.


Bears mentioning here: Spectacle [1] is a general-purpose window manager which offers the same features, via keyboard shortcuts, for any OS X window.

[1]: https://github.com/eczarny/spectacle


> These companies are basically hogging it because they were able to build the user base

I am all for liberating data and letting startups drink out of the firehose, but I have some cognitive dissonance from reading this news.

I know that OLX spends tens of millions of dollars in India and nearby regions to solve the marketplace problem: getting a critical mass of buyers and sellers to achieve escape velocity and enjoy growth through network effects [1]. So it's not just that these companies "happened" to build these user bases; they spent money and took early gambles.

This could very well spell the beginning of the end for much of Craigslist's real-estate listings (with other categories inevitably to follow) unless they have some grand plan to overhaul their UI/UX entirely. Kijiji, OLX and Gumtree are also vulnerable. Maybe even Twitter, since it has a habit of shutting down startups built around its feeds.

What should one do if they are at the helm of CL or one of these other companies?

[1]: https://en.wikipedia.org/wiki/Network_effect


Got me excited there for a minute, until I checked the price. I can't help but think this is way overpriced. The 6" DIY kit from Visionect is ~$600.

The latest 6" Kindle with WiFi is ~$80 (on Prime Day it was going for ~$50). Given that it has WiFi and there are some Jailbreaks for Kindle you could get a Kindle to do this same thing. If you want better contrast you could upgrade to the Kindle Paperwhite for $120. That's anywhere from 8% to 20% of the cost of the Visionect system.

EDIT: Turns out the Kindle can do quite a bit, including running a web server, web browser, SSH and much more [1], so this should be easily doable.

[1]: http://www.mobileread.com/forums/showthread.php?t=128704


I don't really see the equivalence. The dev kit may be overpriced, but the price of a Kindle isn't really a good indicator, to me. They seem like very different products.

How long do you think a Kindle would last out in the elements? What is the battery like? What is the I/O like? Security (physical and software)? Is the screen contrast similar? SDK support? Customizability? IEC 60601 compliance? All of these things are potentially expensive to provide.

Anyway, it may not be the right dev kit for your applications, but the fact that they share a display technology is a tiny piece of the puzzle. As you note, for you the Kindle might do a lot more for less cost (in which case, why not just use one?).

In my experience, looking at mass produced consumer electronics is a pretty terrible way to spitball the cost of anything that isn't mass produced consumer electronics.


The DIY kit cost is sort of irrelevant if you're making a product. It's the large quantity cost that sits at the top of the BOM.

It's very common for these companies to sell dev kits at high cost because the volume is tiny and the overhead of filling the order for them is large. This is nothing new.


You do know that the Kindle is subsidized by book sales, right? That and the fact that Amazon has both economies of scale and a zero-profit/market-domination strategy.

The Visionect kit is cheaper, more capable and better supported than comparable dev kits.


The Kindle might be subsidized in a general sense, but Amazon does not sell the devices at a loss. The base hardware cost is roughly equal to the selling price.[0] I would expect something similar from a barebones developer kit.

[0] https://www.mainstreet.com/article/exclusive-amazon-s-79-kin...


If you haven't already, you should be betting on React. It's the future:

> More importantly, this paves the way to writing components that can be shared between the web version of React and React Native. This isn't yet easily possible, but we intend to make this easy in a future version so you can share React code between your website and native apps.

Isn't this what we have all been waiting for? Writing components that are composable, that are "isomorphic" or "universal", and that can run on Native or the DOM without the downsides of their predecessors.
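
To make that concrete, a shared component might eventually look something like this (purely illustrative, since the post says this isn't easily possible yet; the idea is that the same component tree renders to native views on mobile and to a DOM-backed equivalent on the web):

  import React, { Component } from 'react';
  import { View, Text } from 'react-native'; // on the web, a DOM-backed equivalent

  export default class Badge extends Component {
    render() {
      return <View><Text>{this.props.count}</Text></View>;
    }
  }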

As an aside, the article also mentions a migration tool [1] to help you port your code from 0.13 to 0.14. While others may have done so before, it's pretty cool that this is streamlined as part of the beta release.

[1]: https://www.npmjs.com/package/react-codemod


> It's the future

I've heard that too many times to not be cynical about it.


I've learned to stop caring about fads until they become a reality in project requirements.


It's the future



For CircleCI, it's the present. Their frontend is written in Om, a ClojureScript wrapper around React.




:) Did you see this one? https://medium.com/@boopathi/it-s-the-future-7a4207e028c2


... like HN in a nutshell.


Cross-platform tools always make compromises. I'd rather write native code than have a framework that does it for me.


I'm not convinced that's true.

I do agree to the extent that things like PhoneGap do result in compromises. But React Native is basically a templating tool for native apps – you're still free to write whatever native components you require, but can compose these using React, which is pretty cool. You can then abstract the higher-level components across platforms.


Why would I want to compose native using React?


Probably nothing, for the regular mobile dev. But many web devs can leverage their JS knowledge and hit the mobile market without "web container" apps.


Well, that is the point, actually. React doesn't position itself as a "write once, run everywhere" kind of solution. What's being offered here is better interoperability between native and web components. The core philosophy is still "learn once, write everywhere".


I would say that React is the present but that the future is a circle back to progressive enhancement. I'm seeing more high-profile JS developers jumping off the front-end framework bandwagon and back towards progressively enhancing existing HTML. I think we're going to see frameworks emerge which make progressive enhancement the priority and make it easier. So the whole JS templating engine race that has been going on for the last few years will simply not be important any more.


I feel like React - or something like it - is part of the path back to progressive enhancement, as you can use the same components to generate the initial HTML on the server and to enhance it when JavaScript is available on the client.

As you say, React is the present, and it's making progressive enhancement easier right now.
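
A minimal sketch of that flow with the new 0.14 packages (an App component is assumed):

  // On the server: render the same component to plain HTML and send it down.
  var React = require('react');
  var ReactDOMServer = require('react-dom/server');
  var html = ReactDOMServer.renderToString(React.createElement(App));

  // On the client: React attaches to the server-rendered markup and
  // enhances it in place, rather than rebuilding the DOM from scratch.
  var ReactDOM = require('react-dom');
  ReactDOM.render(React.createElement(App), document.getElementById('root'));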


Sure, it's the future, but it's still quite far away: React Native is available only for iOS (and in a quite beta state), and components are not yet fully shareable between the two. But yes, I agree, I can't wait for this!


It's the JavaScript future. How long will it last, really?


> It's the future.

It's the now. Elm is the future: elm-lang.org


> If you haven't already, you should be betting on React. It's the future

But last week the future was AngularJS, and the week before that it was Ember or Backbone or Knockout, and a few days earlier it was jQuery. Next week it will be Angular 2.0, maybe, or something with Web Components, and the week after that we're all abandoning JS in favour of some declarative functional language that hasn't been written yet but will be on TodoMVC tomorrow.

Seriously, this is a beta of a pre-release package. The article mentions, as motivation for the major architectural change, some other react-something packages, and literally in the opening section of their README.md they say things like "This project is a work-in-progress. Though much of the code is in production on flipboard.com, the React canvas bindings are relatively new and the API is subject to change."

By all means let's experiment with newer and potentially better ways to build sites and apps, but betting heavily on an ecosystem this immature for a long-term professional project seems like asking for trouble.

> Isn't this what we have all been waiting for?

It's certainly not something I've been waiting for, as someone who works in the relevant fields. Rather like "isomorphic" libraries, I feel combined web/native generation is mostly a solution in search of a problem. No doubt a few people really do have that problem, but I'm guessing that for most day-to-day work in the real world this theoretical flexibility adds little practical value. There's no particular reason the same tools should be good choices for writing both web and native apps, any more than we should expect the same tools to necessarily be good choices for writing both server-side and client-side code in a web app.

In this particular case, I'm more concerned by the change in emphasis than the technical changes. To me, the big advantage of React over most other front-end libraries and frameworks is the efficient DOM updating, which in turn makes this kind of component model viable with acceptable performance where various other UI libraries/frameworks have struggled in the past. However, it looks like the React team think that is a secondary benefit and the real advantage is using components.

I can already build a complicated web app rendering layer using the same modular design skills I've spent the last 30 years learning in other programming contexts. After all, that's what I did before we had the modern generation of template/component driven frameworks, when we had to update the DOM manually using plain JS or libraries like jQuery. So did plenty of other people. Maybe I'm missing something, but while React might be a sensible enough choice for part of the UI rendering work, I don't see why it has anything unique or special to offer compared to many other libraries here.

If the React team focussed on the efficient DOM updates, flexible but reasonably simple component model, and then stability so the community could develop the ecosystem around a solid foundation, I could see it becoming a good choice for projects that can't afford to have their dependencies constantly shifting around and the maintenance overheads that incurs. But this separation and increased emphasis on react-native and the like feels like it's leaving behind the very things that made React attractive in a very crowded field of JS UI libraries.


Well, jQuery has been the future for a very short time, and has been the "present" for years now.


I just copied that quote so I could comment on it. I appreciate the React team's vision.


Seriously: do you think it can reverse the "native mobile apps are the only future" trend?


Since React Native has the UI performance of native apps with the ease of development of universal apps (and a minor performance degradation when it comes to logic), it could become a very important player very quickly.


> Isn't this what we have all been waiting for?

God no. It's just backwards.


Some good pointers and links here; surprisingly, they miss both of my favourite approaches.

1. If it's on GitHub, find an issue that seems up your alley and check the commits against it, or browse the commit log in general for interesting commits. I often use this approach to guide other devs in implementing a new feature, using nothing more than a previous commit or issue as a reference and starting point.

2. Unit tests are a great way to get jump-started. They function as a comprehensive examples reference, with both simple and complex examples and workflows. Not only will they contain API examples, but they also let you experiment with the library, using the unit test code as a sandbox.
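
For example, a library's own tests read like runnable documentation of both usage and expected behaviour (a toy mocha-style test; the module under study is hypothetical):

  var assert = require('assert');
  var slugify = require('./slugify'); // hypothetical module you're exploring

  describe('slugify', function () {
    it('lowercases and hyphenates', function () {
      assert.equal(slugify('Hello World'), 'hello-world');
    });
  });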


