
The big problem in robotics is not sensing, electronics, or logic, but mechanics and integration. Just compare the state of the art with our muscles, or the highly optimized integrations of sensors, logic and actuators that exist in the insect world (many of which in isolation we still haven't reproduced) with anything coming out of a 3D printer and it should be obvious we remain far further from the conditions needed for explosive growth than many appreciate.


I disagree. There are plenty of examples of robot hardware that would work great for valuable applications, but the software and sensing isn't there yet.

It's true that most research robots have crappy mechanics and integration. That's because they know that software is the unsolved problem, and hardware is mainly a matter of throwing resources at it. That can wait until the rest of the technology is ready.

Check out a video of IPI's truck unloading robot: https://www.youtube.com/watch?v=Plo7SH9aBgg That hardware more or less existed in the 90s. With the right software, two of those (one throwing, one catching) could unload a truck way faster than humans. But watching it, 90% of the time is sensing and thinking.

Or this humanoid: https://news.ycombinator.com/item?id=9673386 (video sped up 10x). The hardware could indeed be optimized, but it's good enough that it's not the limiting factor. With sufficiently smart software, that could be the robot butler of the future. If you put automotive-scale engineering resources behind it, the hardware would be slick, lightweight, and rugged.


Those robots, compared to biology, are utterly unimpressive. You aren't going to get a robot butler before you have a robot honeybee, but where is the robot honeybee? It's mechanically limited out of existence.

Software etc. is simply nearer to useful than our hardware is, but it's the hardware parts that are missing to make use of current software.


The wing of a plane is incredibly simple compared to the wing of a bird. No muscles, no bones, no feathers - it doesn't even flap. And yet it gets the job done.

Nature has a lot of path-dependence that can be avoided with design. Think of how much cruft accumulates in a software program after 15 years of continuous evolution, and then multiply that by a million. It's not unreasonable to think that whatever hardware we use to emulate various human tasks might be a lot simpler than the wetware that makes up an actual human.


It's true that robot honeybees will require super-advanced hardware, because the job of a honeybee is stupendously difficult. Long range, weather, precision landings, surviving collisions, etc. That's a long way off.

Robot butlers won't need very advanced hardware, because we've designed homes to be comfortable and benign. Comfortable temperatures, power available, utensils and appliances designed for ease of use. Same with logistics.

So yes, we're a long way from humanoid hardware that could compete in a rodeo, or win a soccer match, or survive in the wild. But butlering is physically not too demanding.


You are correct: actuators are a huge problem, and there isn't much being done to fix it. Right now one of the biggest problems is that we don't have efficient, torque-dense electric actuators, that is, actuators that can produce large torques without weighing much. So you end up with robots that are, on a per-weight basis, weaker than we are.

Actuators also don't control what we want them to control: we really need to be able to control actuator force/torque (or, even better, impedance/stiffness), but most actuators only control position. In fact, the entire reason the Baxter robot is able to work with humans (and not need an expensive cage) is that it has force-controlled compliant actuators.


What's amazing about Google is how problematic the quest for new business models is for them in spite of their advantages, and they are almost cursed by their success in one domain polluting the others.

But their really large problem is they're too smart and strategic for their own good. If you look at the ecosystems they construct the only long term winner in them is Google, and other companies are now highly aware of this.


>> only long term winner in them is Google, and other companies are now highly aware of this.

Actually Samsung did very well on Android, probably much better than if Google hadn't shared Android with them.

As for small companies working in their ecosystems (app authors, YouTube creators, site owners), those are decent ecosystems, and generally Google is no worse than any other large ecosystem owner.


Samsung clearly do not share that opinion, and justifiably so.

For other partners, for example Chromebook makers or YouTube, Google is worse than others for the simple reason they work to make everyone else they're working with interchangeable with someone else they're working with, resulting in near perfect competition with them as the gatekeepers. Today's YouTube stars have brands, but they are second in the customer relationship position behind YouTube. The likes of Apple certainly do get themselves in the mix here, but Apple prefer to use their leverage to compete against another ecosystem as opposed to setting their own off against each other.

One of the biggest changes in the tech industry in the last ten years is the wider appreciation of the need to own your own customer relationship, and to prevent others from grabbing yours.


>> Samsung clearly do not share that opinion, and justifiably so.

And Samsung would be wrong. If Google hadn't decided to share Android with Samsung, it's very likely Samsung would have met a fate somewhat similar to Microsoft's. But Google did share it with them, and they did make lots of money, and probably still do.

Sure they would want more, and more control, etc. And android is a tough business. But still Open Android was a great thing for Samsung.

As for Apple behaving better towards their partners - it's hard to tell, because they target a different customer segment than Android, so the dynamic is different.

But if you look at every big company - MS, Amazon, Wireless carriers - commoditizing a complement is a very common technique.


How is that a problem, though?


Or the Y(C)-Factor, where they just televise the YC process.

Given the bizarre effect the X Factor had on the entire ecosystem it operates in you could make a very strong case that it would be a massive net benefit to all participants.


Dragons' Den?


This is entirely true, and it's omitting the worst aspects of the situation relating to revenue and platform momentum.

As someone that has overseen tens (possibly hundreds) of millions of installs driven off the Play Store in my time it pains me to admit that economically speaking Android does not make sense at all today. My impression is many of the HN crowd are in denial about this. Two or three years ago things did look very different.

Reflexively I often like to blame poor stewardship for the situation, but in more sober moments I've come to think that the way Android is distributed and the OHA operated is structurally unsound. The surprising aspect of it is just how successful Apple have been at cultivating an audience composed of the vast proportion of valuable customers, and had they not had such success then the open source Android may have worked out better.

This pisses me off massively because Android makes all sorts of more technically interesting end user apps possible, but with a business hat on if it can't also be made to work on iOS it's not worth doing.


> As someone that has overseen tens (possibly hundreds) of millions of installs driven off the Play Store in my time it pains me to admit that economically speaking Android does not make sense at all today.

> My impression is many of the HN crowd are in denial about this

Oh, the irony here is great


Lollipop is bad enough that Touchwiz, on devices like the Note 4, is a massive improvement over stock Android.


That's the first time I've ever seen that claimed. In what way does Touchwiz improve on Lollipop?

On my Note device, I've done everything I can to hide Touchwiz and go back to stock Lollipop because of how much better it is.


I tried both and I was all set to go back to iPhone, until I tried CyanogenMod, which puts (a lot of) the sense back into Android. It uses the Google launcher (without the Google Now page integrated on swipe left), but it removes the insanely annoying design decisions by Samsung, e.g. displaying a confirmation dialog when you increase the sound volume over a certain threshold, even with your phone locked in your pocket while you're bombing down a hill on your bike. Or the "cannot use camera" and "sorry, dimming your screen" at the 5% battery threshold. I don't know what they are smoking at Samsung, but it's not good.


> displaying a confirmation dialog when you increase the sound volume over a certain threshold

Don't blame Samsung for that, my Nexus 7 did that as well. And in Netflix the dialog appeared behind the active window so you couldn't see it or hit the button.


Critics of Android embraced a meme that Lollipop was Android's Vista, substantiated by users' anecdotal claims about poor battery life after a given upgrade, or the changes to notification levels, etc. So you still see the echoes of that. I've never seen someone go so over the top as to claim that it justified vendor skins, though, so this is a new pinnacle.

To everyone else it was an incremental update.


Yeah, the ritual of rebooting the device every two days because of the memory leaks and camera crashes was quite an incremental update over stable KitKat.


This is more of a comparison of Lollipop vs. Lollipop + TouchWiz rather than Lollipop vs. KitKat.


Even ignoring the placebo effect, and a contingent of people carrying forth a message advocating the amazing world of task killers, what in the world would a skin have to do with that? TouchWiz and others are literally layered over stock Android, so if there were a fundamental issue the skins would have the same issue. The notion that the skins are somehow superior is nonsensical, despite the comical, ignorance-induced downvotes I've predictably received.


Speaking as an Android user/developer:

Lollipop's first release was a bit of a mess in terms of bugs (it has since gotten a lot better, at least on devices that have kept up with releases, like Nexus devices), but I'd still run a stock 5.0 release as a daily-use phone OS over any Touchwiz release ever.

I can't begin to describe how terrible I find the Touchwiz interface relative to stock.


It's interesting to see a wider acceptance of the reality of the mess of Android.

Really the shocking thing was how good the iPhone 6 was, and how ready to switch to iOS places like Korea were when the larger devices became available, when previously it had been assumed those markets were lost to Apple. The social-class-style distinction emerging globally between Android and iOS users is a real and growing problem for Google, especially combined with Apple's probable search engine launch.

The noise from my network in Android land has been that Lollipop remains a disaster. Easily the worst version since Android became popular, and it will be very interesting to see what, if anything, Google have proposed at I/O to deal with the resulting mess.

It says quite a lot that the most exciting thing about Android for I/O is the rumoured stripped down headless version for Internet-of-Things devices, which may become an accidental foundation for a cleaned up future Android proper.


> network in Android land has been that Lollipop remains a disaster

That's merely anecdotal, and my own anecdotes run in the opposite direction.

I'm building https://recent.io/ for Android and iOS and have a Nexus 5, iPhone 5, and iPhone 6 Plus on my desk as test devices as I write this. I use both OSes, though I do use iOS a bit more.

It's true that early versions of L were less than stable, though Apple has had the same problem. Another problem is slow adoption; only in the last few weeks, I think, has the Galaxy A3 been L-upgradable.

The saving grace for Android L is Material Design, which is finally a strong unifying design language, at least as good as what Apple has to offer and IMHO better at spanning different device sizes. It's well thought out and will make Android apps easier to use (and Android generally easier to use) by standardizing UI/UX interactions. That's anything but a "disaster."


I don't have the contacts to know if my opinions on Android are even vaguely representative. I'm interested in what your network thinks is wrong with Android Lollipop, and, without any sarcasm or snideness, can I please ask who your network represents, just to get a sense of how representative it is?


From a user whose devices all just upgraded a short while ago: it has all the usability problems of the new Google Maps. Things are more animated/transition-y/slide-y, but hell if I can figure out what to push or where a desired setting is.

It looks great, but it's not great for usability.


Can you elaborate more? Since I'm finding Lollipop significantly more usable in pretty much all respects - UI is more thought out, runs faster, battery life has significantly improved, dead Wifi detection is more reliable, timed audio profiles are a godsend...

Can you be a bit more specific? Are you perhaps using a Samsung TouchWiz device, where Samsung pretty much removed most of the AOSP UI improvements?


Here's a quick example: on my Shield tablet, when a video goes fullscreen, the button to toggle between fullscreen and embedded disappears for no reason after a few seconds, meaning to escape a fullscreen video I have to hit the home or back buttons. This is a new behavior and super annoying.

On my phone, under settings, there's no obvious button to push to change Wi-Fi settings. There's a skeuomorphic switch to turn it off and on, but no obvious button. In fact there's no obvious button for any of the settings, just the label and an icon; apparently if you push the label for Wi-Fi it's actually a button, but there's literally no affordance that it is such a thing.

Strangely, when you dive deep and view the app info for an individual app, there's buttons and pushable things everywhere.

The new google apps are a mess as well. Here's the process to change users in the gmail app

1) hit the hamburger menu (what could be there?) A panel slides out, which for some reason doesn't go all the way across the screen, because I need that 20% of my inbox that shows me nothing at all to stay visible

2) I have 3 colored circles with faceless people icons; they switch between three of my accounts. Where are the others? I have no idea which one is which, so I push one, get dragged into the inbox for that one, but it doesn't tell me which one I'm in until I hit the hamburger menu again.

3) I'm of course not in the right one, and the one I want to be in is not one of the three choices. Where to now?

4) I see a list of folders, labels, and other crap, does it scroll? Apparently it does (oh, and apparently settings is all the way at the bottom of that scroll list, that will be easy to find) but there's no indication that it's a scrollable thing.

5) Nope, no list of accounts to choose from. Where to now?

6) Oh...the name of the account that I'm currently in is a thing I can push (again no affordance) and if I push that for some reason the little down arrow triangle next to my account name turns to face up and a list of user accounts appear. I thought the arrow was just telling me to look down for stuff about my account. In other GUI metaphors, a down arrow means a selection list is unrolled, so it turning up means that something should have rolled back up. Why do I push my account name to find other account names?

Absolutely a mess.

I could go on, maps is something I use almost daily and it's a similarly painful experience. It's like google just said "fuck it, bury it all under hamburger menus". But didn't google get rid of the menu button on new android phones to force designers not to bury stuff under the equivalent? Remember when android devices had a built in menu button and a search button? Now that garbage clutters up every app screen and lazy designers just bury stuff under them. At least with the menu button I knew what to press to look for options. Now I can't even figure out what's a button, and the buttons they do show use faceless icons that mean nothing.

The new Chrome also, with its "let's have every web page be an activity on the app stack", is terrible. Now I have to thumb through a pile of long-since-closed apps to find the web page I was reading yesterday. Thank goodness somebody had the good thinking to revert that nonsense.

It goes on and on, hiding needed interface options under 2 or 3 deep layers of hamburgers, making buttons invisible, weirdly out of place skeuomorphic sliders, removing affordances...hell, I have to hit "agree" every single time I turn on the location finding thing to consent to location sharing. EVERY SINGLE TIME.

They've completely lost the plot.


> In fact there's no obvious button for all the settings, just the label and an icon, but apparently if you push the label for Wi-Fi it's actually a button, but there's literally no affordance that it is such a thing.

This annoys me. I see what they were trying to do by giving you more options, but the touch target for the label button is so small that I fat-finger it a lot and end up doing something like turning off Wi-Fi.


> It says quite a lot that the most exciting thing about Android for I/O is the rumoured stripped down headless version for Internet-of-Things devices, which may become an accidental foundation for a cleaned up future Android proper.

Can you point me somewhere I can read more about this? i.e. not just the Google search results, but any specific reading if you have suggestions. I'm an Android developer working with a company which is doing some very exotic things with custom hardware, and would kill to have something like this.


Why is Lollipop a disaster? It introduced many great things.


Like memory leaks. Greatly problematic.


Is this an example of Poe's law?


Android can run on $40 phones and is therefore widely used. To me that's the most exciting thing about Android; it's almost as huge as what Microsoft did in the 90s, you know, making actual impact. There are more people talking to Google voice than to Siri.


There are many more people talking to Siri today than used Microsoft software in the 90s. Apple sells more iPhones in a quarter than the annual global sales of computers in 1995.

Mobile is big enough for at least iOS and several flavours of Android. Speaking of which, those $40 Android phones don't come with Google services, do they? So is anyone talking to them?


Why are you comparing today's Siri with Microsoft from 1995? The comparison is between present day Google and Apple.


Not really.

I've had to deploy lots of soft real-time apps on GCed environments over the years, and it's always a problem. You can work around it with things like object pools, but some library or API will assume that the GC is OK and will quietly spit out objects continuously, which will lead to a GC pause.

It's worth pointing out that the Android devs finally started noticing this for Lollipop (probably due to their animations), and the API now has lots of places where it passes Java primitives instead of objects, which is the distinction between passing by value and by reference. Even in C++, modern compilers can only make the most of it if you pass by value, as this enables all sorts of other optimisations to kick in.

The key benefit of reference counting is that it's predictable. Real-time systems are also not strictly the lowest-latency ones; they are defined by predictability. This becomes a preoccupation with minimising your worst-case scenario.
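A minimal sketch of the object-pool workaround mentioned above (the class and method names are illustrative, not any real API): borrowing recycles a previously returned instance instead of allocating, so steady-state operation produces no new garbage for the GC.

```java
import java.util.ArrayDeque;

// Minimal object pool: borrow() reuses a returned instance when one
// is available, so a hot loop allocates only on its first pass.
class Vec3Pool {
    static final class Vec3 { double x, y, z; }

    private final ArrayDeque<Vec3> free = new ArrayDeque<>();
    int allocations = 0; // tracked for demonstration only

    Vec3 borrow() {
        Vec3 v = free.poll();
        if (v == null) { v = new Vec3(); allocations++; }
        return v;
    }

    void give(Vec3 v) { free.push(v); }
}

public class PoolDemo {
    public static void main(String[] args) {
        Vec3Pool pool = new Vec3Pool();
        for (int frame = 0; frame < 1000; frame++) {
            Vec3Pool.Vec3 v = pool.borrow();
            v.x = frame; // ... per-frame work ...
            pool.give(v);
        }
        System.out.println(pool.allocations); // prints 1
    }
}
```

The catch, as the comment says, is that this only helps for objects you control; any library that allocates internally still feeds the GC.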


Jellybean and Lollipop didn't get better smoothness by replacing objects with primitives, the Android API is fixed by backwards compatibility requirements. They did it through a mix of better graphics code and implementing a stronger GC.

If you look at the most advanced garbage collectors like G1 you can actually give them a pause time goal. They will do their best to never pause longer than that. If pauses are getting too long they increase memory usage to give more breathing room. If pauses are reliably less, they shrink the heap and give memory back to the OS.
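The pause-time goal described above is exposed through real HotSpot flags; a minimal way to observe it (the heap churn below is purely illustrative):

```java
// Run with, e.g.:
//   java -XX:+UseG1GC -XX:MaxGCPauseMillis=50 -Xlog:gc GcGoal
// G1 then sizes its regions and incremental work so that reported
// pause times stay near the 50 ms goal, growing the heap if needed.
import java.util.ArrayList;
import java.util.List;

public class GcGoal {
    // Allocate steadily while keeping a bounded live set, so the GC
    // has continuous work to schedule under its pause budget.
    static int churn() {
        List<byte[]> live = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) {
            live.add(new byte[10_000]);
            if (live.size() > 500) live.remove(0);
        }
        return live.size();
    }

    public static void main(String[] args) {
        System.out.println(churn()); // prints 500
    }
}
```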

Reference counting is not inherently predictable and can sometimes be less predictable than GC. The problem with refcounting is it can cause deallocation storms where a large object graph is suddenly released all at once because some root object was de-reffed. And then the code has to go through and recursively unref the entire object graph and call free() on it, right at that moment. If the user is dragging their finger at that time, tough cookies. GC on the other hand can take a hint from the OS that it'd be better to wait a moment before going in and cleaning up .... and it does.
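A toy illustration of that deallocation storm (hand-rolled reference counting, not any real runtime's implementation): releasing one root tears down the entire graph synchronously, right at that moment.

```java
import java.util.ArrayList;
import java.util.List;

// Toy refcounting: when a node's count hits zero, it must release its
// children immediately, so dropping one root frees the whole subtree
// in a single synchronous cascade on the current thread.
class RefCounted {
    static int freed = 0;
    int refs = 1;
    final List<RefCounted> children = new ArrayList<>();

    void release() {
        if (--refs == 0) {
            for (RefCounted c : children) c.release(); // the "storm"
            freed++;
        }
    }
}

public class Storm {
    static RefCounted tree(int depth, int fanout) {
        RefCounted n = new RefCounted();
        if (depth > 0)
            for (int i = 0; i < fanout; i++) n.children.add(tree(depth - 1, fanout));
        return n;
    }

    public static void main(String[] args) {
        RefCounted root = tree(10, 2); // 2^11 - 1 = 2047 nodes
        root.release();                // one unref frees all of them at once
        System.out.println(RefCounted.freed); // prints 2047
    }
}
```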

It gets even worse when you consider that malloc/free are themselves not real time. Mallocs are allowed to spend arbitrary amounts of time doing bookkeeping, collapsing free regions etc and it can happen any time you allocate or free. With a modern GC, an allocation is almost always just a pointer increment (unless you've actually run out of memory).

The problem Apple has is that their entire toolchain is based on early 1990's era NeXT technology. That was great, 25 years ago. It's less great now. Objective-C manages to excel at neither safety nor performance and because it's basically just C with extra bits, it's hard to use any modern garbage collection techniques with it. For instance Boehm GC doesn't support incremental or generational collection on OS X and I'm unaware of any better conservative GC implementation.

Some years ago there was a paper showing how to integrate GC with kernel swap systems to avoid paging storms, which has been the traditional weakness of GC'd desktop apps. Unfortunately the Linux guys didn't do anything with it and neither has Apple. If you spend all day writing kernels "just use malloc" doesn't seem like bad advice.


Objective-C uses autorelease pools so deallocation doesn't happen immediately when the refcount goes to zero. Its reference counting implementation is smarter than a simple naïve one.

Apple's GC implementation wasn't a Boehm GC [1].

It's true that it's hard to use a tracing GC with Objective-C, because of the C. But, if you want interoperability with C, you're kind of stuck.

[1] http://www.reddit.com/r/programming/comments/2wo18p/mac_apps...


> The problem with refcounting is it can cause deallocation storms where a large object graph is suddenly released all at once because some root object was de-reffed

This only happens if you choose to organize the data this way. This is a big difference from GC, where the whole memory layout and GC algorithm is out of your control.


Depends on the language. Go, for instance, gives you a lot of freedom when it comes to memory layout, and allows you to stack-allocate objects to avoid GC.


If you're using the stack, by definition you're not using GC, so this cannot be an argument in favor of GC.


I'm not arguing in favor of GC. The argument was that a GC takes away memory control, but memory control is up to the language.

Go doesn't restrict you to stack/heap allocation. You can create structs which embed other structs. This simplifies the job the GC has to do, even if you don't allocate on the stack.

You can do something similar with Struct types in C#.


"Better graphics code" does mean what I'm on about: removing any triggers for GC, which means removing allocations.

If you're blocking your UI thread with deallocating a giant graph of doom then you have other problems. Deferring pauses, however, is not a realistic option.


I was referring to the triple buffering, full use of GL for rendering and better vsyncing when I talked about graphics changes, not GC stuff. That was separate and also makes things smoother but it's unrelated.

Deferring pauses is quite realistic for many kinds of UI interaction and animation. If your animation is continuous/lasts a long time and requires lots of heap mutation then you need a good GC or careful object reuse, but then you can run into issues with malloc/free too. But lots of cases where you need something smooth don't fit that criteria.


> It's worth pointing out the Android devs finally started noticing this for Lollipop (probably due to their animations) and the API now has lots of places where it passes Java primitives instead of objects, which is the distinction between passing by value and by reference.

This is simply not true. The API has always been heavily based on Java primitives. They didn't even use enums in the older APIs, preferring instead int constants. (I hate that one personally). GC pauses have always been a point of focus for the Android platform.


For example: http://developer.android.com/reference/android/graphics/Canv...

Notice the introduction of methods with API level 21 that recreate existing functionality without RectF objects being allocated. Touch events, for example, still spit ludicrous amounts of crap on to the heap.

All this is why the low-latency audio is via the NDK, as it's basically impossible to write Android apps that do not pause the managed threads at some point. Oddly, this is stuff the J2ME people got right from day one.
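The allocation-free methods referred to above follow a common pattern: the caller passes in a mutable holder for the method to fill, instead of receiving a freshly allocated object on every call. A sketch with an illustrative Rect class (not the actual Android signatures):

```java
// Two shapes of the same query: the first allocates per call, the
// second reuses a caller-supplied holder (the Lollipop-era pattern).
public class NoAllocDemo {
    static final class Rect { int left, top, right, bottom; }

    // Allocating style: every call creates garbage for the GC.
    static Rect getBoundsAllocating() {
        Rect r = new Rect();
        r.right = 100; r.bottom = 50;
        return r;
    }

    // Allocation-free style: fills a Rect the caller owns.
    static void getBounds(Rect out) {
        out.left = 0; out.top = 0; out.right = 100; out.bottom = 50;
    }

    public static void main(String[] args) {
        Rect scratch = new Rect(); // allocated once, reused every frame
        for (int frame = 0; frame < 60; frame++) {
            getBounds(scratch);    // no per-frame allocation
        }
        System.out.println(scratch.right); // prints 100
    }
}
```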


That's terribly ugly. Can they not do escape analysis or something to avoid allocations in obvious places? Or only allocate when the value is moved off the stack?

Doesn't Android use a different flavor of Java anyways, allowing them to make these changes?


Yes, it's possible in theory, but Dalvik/ART don't do it. HotSpot does some escape analysis and the Graal compiler for HotSpot does a much more advanced form called partial escape analysis, which is pretty close to ideal if you have aggressive enough inlining.

The problem Google has is that Dalvik wasn't written all that well originally. It had problems with deadlocking due to lock cycles and was sort of a mishmash of C and basic C++. But then again it was basically written by one guy under tight time pressure, so we can give them a break. ART was a from scratch rewrite that moved to AOT compilation with bits of JITC, amongst other things. But ART is quite new. So it doesn't have most of the advanced stuff that HotSpot got in the past 20 years.
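For illustration, the kind of case escape analysis handles: the Point below never escapes distance(), so a JIT like HotSpot can scalar-replace it and skip the heap allocation entirely. (A sketch; whether the optimisation actually fires depends on the JIT and on inlining.)

```java
public class EscapeDemo {
    static final class Point {
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }

    // The Point is used only inside this method, so an escape-analysing
    // JIT can replace it with two local doubles: no allocation, no GC.
    static double distance(double x, double y) {
        Point p = new Point(x, y);
        return Math.sqrt(p.x * p.x + p.y * p.y);
    }

    public static void main(String[] args) {
        System.out.println(distance(3, 4)); // prints 5.0
    }
}
```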


Yes, this is why I always find it sad that language performance gets thrown around in discussion forums without reference to which implementations are actually being discussed.


J2ME got a bad name due to the firmware problems that OEMs brought to the platform, but Android is becoming worse than it.


Reference counting does not provide any guarantees about when objects get deallocated either; any removed reference may drop the counter to zero and trigger a deallocation. It may of course be worse with a full-blown garbage collector building up a huge pile of unused objects and then cleaning them all up at once. But that is not a necessary limitation; there are already garbage collectors that perform the entire work in parallel with normal application execution.


Objective-C uses pools, so deallocation doesn't happen automatically when the reference count hits zero. Apple's reference counting implementation is fairly smart.

Over on the Reddit discussion there was a comment from Ridiculous Fish, who at least was an Apple developer (and probably still is) and worked on adding GC to the Cocoa frameworks.


Basically, because of interop with C, there's only so much you can do. Plus, the tracing GC wasn't on iOS so if you want unified frameworks (for those that make sense cross-platform), supporting the tracing GC along with ARC is added work.


This is dead right.

The fun begins when you think about how this impacts anything that tries to scrape web pages, because the side effect is going to be a lot of impenetrable data silos.

Google and co will have to actually run web browsers in the cloud and use OCR to do indexing if they aren't already.


My guess is we'll come up with a sort of standard API, using JSON or such, through which we'll make the information available, ultimately mimicking what HTML does now.


Let's use an xml-like (but not really xml) syntax instead. And we'll make it so the browser can view this format natively (with no javascript).


This actually made me giggle. What a ridiculously backward situation we find ourselves in.


Maybe Google could write a performant DOM in Chrome before it is too late?


See Project Ganesh, demoed at the Chrome summit late last year and apparently in Chrome 41:



Underneath this is the simple fact that educational standards in Quebec are disgracefully bad. A frightening proportion of kids never graduate high school either, and making it into McGill is near the upper reaches of achievement for those that do.

This is going to provoke a lot of kneejerk "but it's worse in [x]". No, in the developed world it's probably not. There are lots of dynamics unique to the Quebec situation which allow this to perpetuate.

Edit to add a useful reference: http://www.cbc.ca/news/canada/montreal/dismal-dropout-rates-...

"Last year alone, only 40.6 per cent of the boys followed in the 2007 cohort at the French-language Commission scolaire de Montréal graduated in five years."


That. Growing up in Quebec, I can honestly say that my mathematical education was really, really bad.

I remember spending a whole year where we had no math teacher, so instead we had one of the French language teachers teach the class. We did mostly math-related crosswords.

All the kids were failing the class, so they simply made us all pass. Great job, school.

I still have a lot of issues because of this. Hard to learn advanced concepts when your basics were screwed up.

That being said, one beautiful reason that many Quebecers don't go on to university is the fact that CEGEP can get you into the work force for a low price and in a short amount of time. I wouldn't want this part to change. High school, however? The whole program is a mess.


The public French school boards tend to have lower passing rates in maths and sciences than the public English boards, though; chalk it up to the shit-show you need to go through to be allowed to learn in English, or whatever else might be at play. But the math and science reforms that Quebec implemented 8 years ago torpedoed basic math and science proficiency across the board.


What makes it so hard to find a math teacher in Quebec?

I've traveled through Quebec a number of times, including a bicycle trip that went through Chibougamau - as a visitor, I love the province. I can understand that some remote areas of Quebec are hard to staff effectively. Is this more than a rural/ urban issue?


We have a lot of underfunded schools. The particular school I am talking about is next to a train track and a metal foundry (everything was covered with yellow dust and there was smog all over) and uses a parking lot as a recreation area (hey, you can play hockey on it during winter...). In 2009, one of the walls/windows fell during winter and the kids had to wear their coats indoors for a month, since plastic garbage bags and duct tape were all that was covering the hole.

As for what happened to the math teacher in question, I believe she left the school because she had to teach multiple groups (can't remember how many, but it was without a doubt too many) that were too big and filled with kids that shouldn't have been there in the first place: learning disabilities, violent teens, etc., all crammed into a small room, in groups of... I think it was 35 students.


Note that the stat I pulled out is from a school board in Montreal.


Well to be fair, we don't know that the 6 students called were from Quebec. McGill is a fairly popular school for kids from Ontario and the Maritimes provinces too.


And Americans. It's a cheaper option (or was, given the recent international tuition increases) for those who can't afford even in-state tuition in some states.


As with anything else, the situation is more nuanced. Quebec schools as a whole are pretty good: http://www.theglobeandmail.com/globe-debate/editorials/quebe...

The problem is that good students are mostly in private schools and specialized programs (international programs) so "regular classes" in public schools contain a lot of low performing students and students with disabilities.

My gf is a public high school teacher in Montreal, and they do miracles every day with the low amount of resources they have to work with. She has an M.Sc. in her specialization (history), but does mostly special education tasks since the student levels are so low.


So if 60% of the 80% condemned to use the public school system are failing to achieve the already-lowered standards, this is mere nuance?

What you have in Quebec is an elite (both English- and French-speaking) that gifts their children a private bilingual education with actual competition while they actively deny those rights to the rest of the public. Poor monolingual French speakers are actively screwed from birth, to such a degree that they don't even notice how bad it is.


I'm just saying that we must be careful citing CSDM numbers because they represent a special situation. The CSDM has a lot of first-generation immigrants, poor students, students with learning disabilities, etc. The middle-class students are in good schools in the suburbs or in private schools.

The CSDM in Montreal has such a bad reputation that if you're not accepted into a specialized program (let's say, the international baccalaureate), you go to a private school if you can afford it ($4k/year). And don't get me started about union rules for new teachers...

On the other hand, if you're a special ed teacher, the CSDM is hiring like crazy. Not so much for math, social science, French, or English teachers though.


The CSDM is not a special situation at all. You can go to the townships, Lanaudiere, through every Montreal suburb and find the same phenomenon at work.

The root problem is that Quebec's monolingual French speakers have been force-fed a diet of anti-intellectual nonsense for so long that they no longer see the value of education. Hockey is seen as the way out, but failing that, the government will always be guilt-tripped into paying welfare for them.


This. But this is not because "good students are mostly in private school", it is because public school is just bad.

I am a Quebec resident and I spent most of my high school years in a private school. I went to a public school for one year: the level of education was SO poor and the students' motivation was the lowest I had ever seen.

There is a huge discrepancy between the public and private sectors, and people are trying to cut down private school funding[1].

I've seen both, and public school is a disgrace; no kid should have to go through this. I've seen teachers insulting students (and vice versa), teachers being hungover on a class day and telling the students to read their book, teachers raging against students (and vice versa), and classes being generally content-less.

[FR] [1] http://www.lapresse.ca/le-soleil/opinions/editoriaux/201407/...

EDIT: Not saying we should abolish public school, but it HAS to improve. My experience (and what I have seen of people going on to higher education) was horrible. Private school has good students because it has (mostly) good teachers (and some selection, I admit), whereas public school has a dominance of bad teachers.


> No, in the developed world it's probably not.

You could even argue about developing world. Children usually have a strong motivation to learn, and unless there is civil war, they do.


Is this based on CRDTs? (https://en.wikipedia.org/wiki/Conflict-free_replicated_data_... or ideas similar to that)


Yes, it is very similar. I'll actually be building some CRDT plugins on top of the GUN core. CRDTs usually deal with specific data types. Interested in helping?
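(Not GUN's actual API — just a minimal sketch of the kind of specific data type CRDTs deal with. A grow-only counter, or G-Counter, keeps one slot per node and merges by taking per-node maxima, so merges are commutative, associative, and idempotent, and concurrent updates always converge:)

```typescript
// G-Counter CRDT: each node increments only its own slot.
type GCounter = Record<string, number>;

function increment(c: GCounter, nodeId: string): GCounter {
  return { ...c, [nodeId]: (c[nodeId] ?? 0) + 1 };
}

// Merge takes the per-node maximum, so applying it in any order,
// any number of times, yields the same result.
function merge(a: GCounter, b: GCounter): GCounter {
  const out: GCounter = { ...a };
  for (const [node, n] of Object.entries(b)) {
    out[node] = Math.max(out[node] ?? 0, n);
  }
  return out;
}

// The counter's value is the sum over all node slots.
function value(c: GCounter): number {
  return Object.values(c).reduce((sum, n) => sum + n, 0);
}

// Two replicas update independently, then sync:
let a: GCounter = {};
let b: GCounter = {};
a = increment(a, "A");
a = increment(a, "A");
b = increment(b, "B");
const synced = merge(a, b);
console.log(value(synced)); // 3
```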

