Ask HN: What is the most exciting development in your field right now?
516 points by yellow_viper on Feb 5, 2017 | 424 comments



I'm entering radiology residency, and I'm very pro-automation / machine learning. There's a contentious debate in the field about whether radiologists will be replaced: https://forums.studentdoctor.net/threads/will-ai-replace-rad...

HackerNews is very developer-focused. If you guys saw what a radiologist does on a 9-5 basis you'd be amazed it hasn't already been automated. Sitting behind a computer, looking at images and writing a note takes up 90% of a radiologist's time. There are innumerable tools to help radiologists read more images in less time: Dictation software, pre-filled templates, IDE-like editors with hotkeys for navigating reports, etc. There are even programs that automate the order in which images are presented so a radiologist can read high-complexity cases early, and burn through low-complexity ones later on.

What's even more striking is that the field of radiology is standardized, in stark contrast to the EMR world. All images are stored on PACS which communicate using DICOM and HL7. The challenges to full-automation are gaining access to data, training effective models, and, most importantly, driving user adoption. If case volumes continue to rise, radiologists will be more than happy to automate additional steps of their workflow.

Edit: A lot of pushback from radiologists is in regard to the feasibility of automated reads, as these have been preached for years with few coming to fruition. I like to point out that the deep learning renaissance in computer vision started in 2012 with AlexNet; this stuff is very new, more effective, and quite different from previous models.


20 years ago I did some software to analyze satellite images of the Amazon to monitor deforestation. We got a result that matched the quality of human experts. The problem has always been political and economic, not technological.


I agree; nevertheless, it's exciting to see some progress being made (see the HeartFlow and iSchemaView companies cited below). Incidentally, there is a new angle to the radiologists' pushback - interventional radiologists are generally quite in favor of automating reads. This year IR became its own specialty, but it's coupled to three years of DR training. This is the specialty that I'm entering. I want to be an interventionalist while advocating for the adoption of machine learning systems on the diagnostic side.


You are right that work in these areas tends not to be hampered by technology. It may even be hazardous to one's health, considering the value of timber. 20 years ago is a long time; GIS must have still been in its infancy then. These days one could pull free satellite images from NASA and probably just diff images, if not for those pesky clouds. Oh wait, actually I think NASA does do earth imagery at various spectral ranges: https://www.odysseyofthemind.com/aster.htm.

Curious to know what sort of methods you used then if you don't mind sharing.


Excuse me for the delay in answering.

The best results were with backpropagation neural networks: http://www.sciencedirect.com/science/article/pii/0888613X949...

But we also used fuzzy logic neural networks with genetic algorithms: http://ieeexplore.ieee.org/document/712156/?reload=true

It was 24 years ago.
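
(For anyone curious what the off-the-shelf equivalent looks like today: a minimal sketch with scikit-learn, where the data files, window sizes and network shape are all invented for illustration - this is not the original software.)

    # Minimal sketch: classify satellite pixel windows as forest vs. deforested.
    # Purely illustrative; the 1990s work used hand-rolled backpropagation nets.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Hypothetical data: each row is a flattened window of spectral bands,
    # each label is 0 (intact forest) or 1 (deforested).
    X = np.load("pixel_windows.npy")   # shape (n_samples, n_features) -- assumed
    y = np.load("labels.npy")          # shape (n_samples,)            -- assumed

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))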


I'm not sure if there was more to your story that you left out. Was your software successful? Is it in use today? Is automation widespread in the study of Amazonian deforestation?


Software not being used all over the place does not mean the software is bad, generally.


No. No. No. :-)


The new means of production has now been realized and the need to replace capitalism is now at hand. Hackers of the World Unite!


How did your software work, what techniques was it using, and what was the surrounding context relating to the data that was fed in and the output the model gave?

There might be some interesting things that can be learned from this kind of info and applied to the current status quo (I'm definitely not arguing that there is a sociopolitical element).



I suppose full automation may be a long way off, but maybe in the mean time we can do both? Have the radiologist evaluate the images, and then look at the report the AI generated and see if they agree. Use the radiologist to help train the AI, and use the AI to double-check the radiologist.

Maybe if MRI scans get cheap enough (due to advances in cheap superconductors or whatever) that it's economically feasible to scan people regularly as a precautionary measure (rather than in response to some symptom), then the bulk of the cost might be in having the radiologist look at the scans. In those "there's nothing wrong but let's check anyway" cases, it might be better to just have the AI do it all even if its accuracy is lower, if it represents a better health-care-dollar-spent to probability-of-detecting-a-serious-problem ratio. (If the alternative is to just not do the scan because the radiologist's fees are too expensive, then it's better to have the cheap scan than nothing at all.)


PACS developer here - I asked the same thing about automation a few years ago when I started in PACS. I was told it's the need to have someone to sue that holds back automation, not so much the technology.


Wouldn't that objection disappear once automation is more reliable than a human operator, since the human operator would be less likely to be sued when taking responsibility for an automated system's results than they would for simply trying to make the call themselves?


It's more about who gets sued. If you make software to fully automate interpretation you would be the party sued. If there is still a human in the loop, you are not liable for their errors.


I don't understand how that works - if you're an employee of a company, generally the company is liable for mistakes you make (unless you're also a director of the company). So for the company, whether they choose to provide their services based on the output of some software or based on the judgement of a human employee, the result should be the same.

I can see an argument that if the company was sued then it could try to push the blame onto the software vendor, but surely that would be decided based on the contract between company and software vendor, which is usually defined by the software license.


The person operating the machinery is not working for the company that makes it though.


This seems absurd. Are there any examples you can point to, or legal principle you are referencing here?


But in many areas of medicine, solutions have existed that outperform humans for years (not related to current deep learning wave). Yet they were not implemented for regulatory/legal reasons.


Can you share any examples? I'm a non-medical human and curious.


This suggests a lucrative area of legal practice: suing for negligently failing to automate.


https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3372692/

Machine learning is already used in radiology. Chances are radiology will eventually be the domain of machines, but it's going to take some time to get there. Healthcare is extremely regulated and closed-minded.

Most of the people in the thread you linked above are clearly biased towards medicine and against computer science and machine learning. But machine learning has been having success in diagnostic medicine since well before the deep learning boom that thread talks about.

For example: http://ieeexplore.ieee.org/abstract/document/1617200/


Agreed, in theory it's possible to create a tricorder-like system. Basically, radiology and advanced diagnostics in a box.


As someone who just struggled through writing a DICOM parser, _standardized_ doesn't always mean everyone implements it the same way. For more examples, see RETS in the real estate world :)


For those curious:

- Some RETS recorded requests/responses: https://github.com/estately/rets/blob/master/test/vcr_casset...

(Basically something XML-based (SOAP?), with cookie + authorization, that seems very ASP.NET / Windows Server centric)

- DICOM ("It includes a file format definition and a network communications protocol"): https://en.wikipedia.org/wiki/DICOM

It's basically how imaging devices communicate and store the images.

Image examples: http://www.osirix-viewer.com/resources/dicom-image-library/

A video of what a doctor would see: https://www.youtube.com/watch?v=Prb5lcR8Jqw

TCP-based protocol in Wireshark: https://wiki.wireshark.org/Protocols/dicom

I wrote a little .dcm to .jpg converter based on ruby-dicom BTW (a rough Python equivalent is sketched at the end of this comment): https://gist.github.com/Dorian/9e3eb5891b49926c15a05c641ffef...

- PACS seems just like a database model basically http://mehmetsen80.github.io/EasyPACS/

It's the server that is gonna give the info to the doctors.

It's interesting how there seems to be only one popular viewer: OsiriX
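
Since the thread is about automating reads: the .dcm-to-image step from that gist looks roughly like this in Python with pydicom + Pillow. A rough sketch only - the file name is a placeholder, and real modalities need proper RescaleSlope/Intercept and window/level handling that is skipped here.

    # Rough DICOM -> JPEG conversion (same idea as the ruby-dicom gist above).
    # Assumes a simple single-frame grayscale image.
    import numpy as np
    import pydicom                       # pip install pydicom
    from PIL import Image                # pip install Pillow

    ds = pydicom.dcmread("example.dcm")  # placeholder file name
    pixels = ds.pixel_array.astype(np.float32)

    # Normalize to 0-255 for an 8-bit JPEG.
    pixels -= pixels.min()
    if pixels.max() > 0:
        pixels /= pixels.max()
    Image.fromarray((pixels * 255).astype(np.uint8)).save("example.jpg")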


Thank you! This helped hugely


When I looked at HL7 a few years back, it was standardized only in the most technical sense and was basically a big bag of hurt. Epic made a big chunk of their money just charging people to build interfaces over various PACS (and other) systems. Has that changed recently?


HL7 has matured even more now. The linear evolution from HL7 v2 to HL7 v3 did not win the day. Most are now moving to https://www.hl7.org/fhir/

The main issue with HL7 is not technical. From a business point of view, cooperating with other systems via HL7 gives a department one more reason to adopt a system other than yours.


Ruby makes working with things like HL7 super easy. Check out the super extensible HL7 ruby parser I wrote here: https://github.com/sufyanadam/simple_hl7_parser. It currently focuses on parsing ORU messages but can be easily extended to support any HL7 segment. Feel free to submit a PR :)


Yes and no. I spoke with an Epic engineer recently and he confirmed that Epic is still a front-end to HL7 databases. With that said, many PACS are adopting the WADO standard (https://www.research.ibm.com/haifa/projects/software/wado/), which provides a REST interface to radiology images. It makes it a lot easier to retrieve images for analysis, although you'd still have to implement DICOM/HL7 if you want to make a usable product.
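
For a feel of what that looks like in practice, here's a hedged sketch of pulling a single object over classic WADO-URI with Python's requests library. The endpoint and UIDs are placeholders, newer WADO-RS servers use a path-based URL scheme instead, and most real deployments sit behind extra auth.

    # Sketch: retrieve one DICOM object via a WADO-URI endpoint.
    import requests

    WADO_URL = "https://pacs.example.org/wado"    # hypothetical endpoint

    params = {
        "requestType": "WADO",
        "studyUID":  "1.2.840.113619.2.1.1",      # placeholder study UID
        "seriesUID": "1.2.840.113619.2.1.2",      # placeholder series UID
        "objectUID": "1.2.840.113619.2.1.3",      # placeholder SOP instance UID
        "contentType": "application/dicom",       # or image/jpeg for a rendered view
    }

    resp = requests.get(WADO_URL, params=params, timeout=30)
    resp.raise_for_status()
    with open("retrieved.dcm", "wb") as f:
        f.write(resp.content)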


About 5 years ago, while observing an echocardiogram, I remarked to the radiologist that this was something that could be automated eventually. I don't think she took too kindly to my remark. She was measuring the distance between various structures etc., and I recall thinking about how to implement something to reproduce what she was doing.


I was talking to a friend in medicine about this recently. Is total automation actually required right now? As others have said, I'm sure politics is a big chunk of why that hasn't happened yet. What if we had tools to help radiologists identify where they should focus their attention in a particular image, and even give them some hinting specific to the contents of that particular image? It would not only save time, but have the added benefit of helping stave off errors caused by fatigue.


Can confirm. Have a friend who is in radiology residency, and I was completely shocked when I found out what radiologists do. When I told him that his job will be automated before he retires, he argued profusely against automation. But it's primarily an image recognition task, which computers are quite good at already and will likely improve.


Have you heard about http://www.heartflow.com/ ?


Yes! Stanford has started (at least) two radiology imaging companies: HeartFlow and iSchemaView (formerly RAPID): http://www.ischemaview.com/

These are examples of next-generation radiology companies. The current generation of products is focused on image storage and display. These new companies offer automated image analysis before the radiologist even looks at the image. iSchemaView produces hemorrhage maps as soon as a new head CT or head MRI is acquired.


> The challenges to full-automation are gaining access to data, [...]

It looks like everybody sitting on their data is hindering progress. Is there anything that can be done about that politically? I mean, in many cases the data belongs to the public anyway, unless people signed a waiver, but what is the legality of that?


Wow. It is crazy how uninformed you all are (no offense). Radiologists do not just look at an image and say "white thing there!". They incorporate the appearance, characteristics, anatomy, pathophysiology, and the patient's clinical history/age/medications/surgical history, and combine all of that information into image findings and, more importantly, a focused differential diagnosis for the clinician. We have had computers reading EKGs for decades (a 2D line) and they still get it wrong 50+ percent of the time. No machine is taking over EKGs any time soon.

I'm sure machines will someday take over radiology but there will be many, many jobs automated before it (i.e. decades).


I wrote a really simple Ruby gem to parse HL7 into Ruby objects. It's super easy to extend: all you have to do is define a new class containing a function-to-column map. The key is the name of the function to call against an HL7 segment and the value is the position of the element in the HL7 segment that you want the function to return. Check it out here: https://github.com/sufyanadam/simple_hl7_parser
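
For anyone who hasn't seen HL7 v2, the wire format itself is simple enough that a toy parser fits in a few lines. A hedged Python sketch of the same function-to-column idea (the sample message is a made-up fragment; real HL7 has escape sequences, repetitions and sub-components that this deliberately ignores):

    # Toy HL7 v2 parsing: segments are lines, fields are '|'-separated,
    # components are '^'-separated. Not a replacement for a real parser.
    SAMPLE = "\r".join([
        "MSH|^~\\&|LAB|HOSP|EMR|HOSP|201702051200||ORU^R01|123|P|2.3",  # made-up message
        "PID|1||555-44-3333||DOE^JOHN",
        "OBX|1|NM|GLU^Glucose||95|mg/dL|70-110|N",
    ])

    def parse(message):
        segments = {}
        for line in message.split("\r"):
            fields = line.split("|")
            segments.setdefault(fields[0], []).append(fields)
        return segments

    msg = parse(SAMPLE)
    # The "function-to-column map" idea: OBX-5 is the value, OBX-6 the units.
    obx = msg["OBX"][0]
    print("patient:", msg["PID"][0][5].split("^"))   # ['DOE', 'JOHN']
    print("glucose:", obx[5], obx[6])                # 95 mg/dL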


I'm curious, how hard do you think it is to integrate new tech with the systems currently being used in radiology? Are the file formats and standards largely proprietary/closed?


It's not too difficult to make your own PACS, there's even open-source software for this. However, you need to be compliant and this usually means obtaining FDA approval of any device you want to install on the hospital network.


Have any of the methods made radiologists more efficient? If you were to imagine a system that made a radiologist 10x more efficient what would it look like?


Yes, massive increases in efficiency. A radiologist can read anywhere from 50-100 images per day (depending on modality CXR/MR/CT/etc). Voice dictation is ubiquitous and residents are trained from the beginning on how to navigate the templating software.

There are three areas that take a lot of time that radiologists would like to see automated:

1. Counting lung nodules.

2. Working mammography CAD.

3. Automated bone-age determination.

Those are the three hot topics for machine learning. Personally, I think that a normal vs. non-normal classifier for CXRs would be more interesting, because you could have a completely generated note for normal reads, and radiologists could just quickly look at the image without writing/dictating anything. Of note, hospitals and radiology departments typically lose money on X-ray reads because the reimbursement is $7-$20 (compared to $100+ for MR/CT). So if you could halve the read time, they might become profitable again.

Edit: In terms of 10x, what you'd want is a system that would automatically make the reads (i.e. the radiologist report), plus a very efficient way for the radiologist to verify what is written. It's hard to automatically generate a pathologic read, but since roughly 50% of reads are normal, you could start with normal reports.
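
To make the normal-vs-abnormal idea concrete, here's what a minimal transfer-learning sketch might look like in Python/Keras. Everything here is illustrative: the directory layout, image counts and hyperparameters are invented, and a clinically useful classifier would need far more data, validation and regulatory work.

    # Minimal transfer-learning sketch: normal vs. abnormal chest X-ray classifier.
    from keras.applications import VGG16
    from keras.layers import Dense, GlobalAveragePooling2D
    from keras.models import Model
    from keras.preprocessing.image import ImageDataGenerator

    base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    for layer in base.layers:                # freeze the pretrained convolutional base
        layer.trainable = False

    x = GlobalAveragePooling2D()(base.output)
    out = Dense(1, activation="sigmoid")(x)  # P(abnormal)
    model = Model(inputs=base.input, outputs=out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # Hypothetical folders: cxr/train/{normal,abnormal} and cxr/val/{normal,abnormal}
    gen = ImageDataGenerator(rescale=1.0 / 255)
    train = gen.flow_from_directory("cxr/train", target_size=(224, 224),
                                    batch_size=16, class_mode="binary")
    val = gen.flow_from_directory("cxr/val", target_size=(224, 224),
                                  batch_size=16, class_mode="binary")

    model.fit_generator(train, steps_per_epoch=100, epochs=5,
                        validation_data=val, validation_steps=20)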


Before even getting to AI/ML kinds of things, our startup (www.radfox.fi) is doing "simple" fixes to current workflows: first, making sure radiology referrals are both informative and decisive (i.e. the quality and accuracy of the referral).

And then bringing checklist-driven analysis to radiologists.


My mom (an anesthesiologist) wanted me to become a doctor and do my master's in radiology.

So I decided to observe a radiologist at a hospital for a day (back in 2011), and I noticed that most if not all of it was already automated.

Radiologists were there to rubber-stamp the machine's work (and to ensure compliance, laws and what not).


My field is web development, and, to be honest, the most exciting thing going on is that more people are starting to complain about the complexity of development. Hopefully this will lead to people slowing down and learning how to write better web software.

As an example, one survey (https://ashleynolan.co.uk/blog/frontend-tooling-survey-2016-...) put the number of developers who don't use any test tools at almost 50%. In the same survey about 80% of people stated their level of JS knowledge was Intermediate, Advanced or Expert.


Yeah, this is one area where webdev is way behind other fields, and I think we're going to see lots of new tooling in this area soon.

We're currently working on a way to help devs test web app functionality and complete user journeys without having to actually write tests in Selenium or whatever. The idea is to let devs write down what they want to test in English ("load the page", "search for a flight", "fill the form", "pay with a credit card", etc.), then we'll use NLP to discern intent, and we have ML-trained models to actually execute the test in a browser.

You can give us arbitrary assertions, but we also have built-in tests for the page actually loading, the advertising tags you use, page performance, some security stuff (insecure content, malware links). At the end we hand you back your test results, along with video and perf stats. It’s massively faster than writing Selenium, and our tests won’t break every time an XPATH or ID changes.


>wants to reduce complexity

>tries to do so by using an imprecise, context-dependent language designed for person-to-person communication to instruct a machine

???

Selenium is its own can of worms, but it absolutely sounds like you're using the wrong tool for the job here. The problem stopping people from writing browser-based tests is not that people can't understand specific syntaxes or DSLs, it's actually the opposite: people don't have a good, reliable tool to implement browser-based testing in a predictable and specific way that does what a user would intuitively expect.

Selenium fails here because it has to manage interactions between browsers; because selectors are hard to get right on the first try and continually break as the page's format changes; because JavaScript can do literally anything to a page, and that is really hard to anticipate and address reliably from a third-party testing framework like Selenium, especially if components are changing the DOM frequently; because Selenium is subject to random breakage at the WebDriver layer that hangs up your (often long-running) script; and so on.

Whatever the right answers to a next-gen Selenium are, attempting to guess the user's meaning based on Real English by something that is itself an imperfect developing technology like NLP is pretty obviously not the correct toolkit to provide that. Remember, a huge amount of the frustration with Selenium comes from not having the utilities needed to specify your intention once and for all -- the ambiguities of plain English will not help.

If your thing works, it will have to end up as a keyword based DSL like SQL. SQL is usually not so scary to newcomers because a simple statement is pretty accessible, not having any weird symbols or confusing boilerplate, but SQL has a rigid structure and it's parsed in conventional, non-ambiguous ways. "BrowserTestQL" (BTQL) would need to be similar, like "FILL FORM my_form WITH SAMPLE VALUES FROM visa_card_network;"

The biggest piece that's missing in Selenium is probably a new, consistent element hashing selector format; each element on the page should have a machine-generated selector assigned under the covers and that selector should never change for as long as the human is likely to consider it the "same element". The human should then use those identifiers to specify the elements targeted. I don't know how to do that.

The second biggest piece that's missing from Selenium is a consistent, stable WebDriver platform that almost never errors out mid-script; this may involve some type of compile-time checking against the page's structure or something (which I know is hard/possibly impossible because of JS and everything else).


Totally agree with this. The concerning part for me is that ML is about making a "best guess" given some data. This means that your tests may pass one time and fail another - inconsistent tests aren't tests at all.


Your post gave me deja vu to an automation workflow I cobbled together a few months ago, which I found wonderfully productive for steering a bot: Vimperator keybindings. Selenium can use most of them right out of the box. It's a terrific navigation layer. For instance, pressing "f" enumerates all the visible links on the page and assigns to each one a keybinding. The keybindings are displayed in tooltips and can be trivially extracted with CSS. You can keep sending keys to the browser, and only links that contain the anchor text remain in the set of candidates. Of course the "hashing" of links to keybindings is completely relative to the viewport, so this won't satisfy you completely. But it was an idea I had randomly one day, as an alternative to the trapeze act of navigating through the boughs of the DOM tree, and lo and behold it worked nicely.


Sounds intriguing. There are a few tools for recording interactions with a webpage in order to replay the actions as a test (Ghost Inspector, Selenium IDE, etc) but they tend to be pretty horrible. I've been working on my own as a Chrome extension for a little while. What you're building sounds really interesting though, especially if it can deal with complex Javascript apps. Anything that can make developers more inclined to test things is a good thing.


Alternatively, one of the consequences of React is that the front-end can largely be unit-tested. You can at least get a pretty good idea that the page will render what you expect if it gets the data you expect.

And whether or not it gets that data is a unit test in another place.

I'm not a huge fan of React, or javascript, but having been forced to work in it, this is one of the wins.


You could test the front-end before React though? Also, one can still have a God component. I don't think React changed anything from a testability point of view. Well-written modular code is well-written modular code.


I would argue that the case that React makes for testability is abstracting DOM manipulation in a way that is highly testable.


Frameworks like Knockout have been around for quite some time now. You don't have to use React to avoid depending on the DOM. There are many alternatives to "jQuery-based front-end development" that aren't React. Aurelia, for instance, happens to be an amazing framework in my opinion that's also highly testable and that's not React. Like I said, modular code is modular code. You can write good, modular code with just RequireJS modules, and you can also write terrible monolithic React components.

Testability isn't the domain of the view layer.

Abstracting the DOM into a declarative DOM is great for performance, but doesn't necessarily lead to more testable code.


We are using Ghost Inspector mainly for its ability to compare screenshots between runs. I think the future of testing will be apps like this that don't require you to specify every little div, but just record your actions, play them back, and catch differences. Right now Ghost Inspector only takes a screenshot at the very end, but they are adding a feature where you can take a shot anytime. As these apps get better at knowing what matters and what to ignore - all the better.
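
The core of the screenshot-comparison trick is surprisingly small. A rough Python sketch with Pillow (file names are placeholders; real tools like Ghost Inspector add fuzz tolerances, ignore regions, and so on):

    # Flag a visual regression by diffing two screenshots of the same page.
    from PIL import Image, ImageChops

    baseline = Image.open("baseline.png").convert("RGB")   # placeholder file names
    current = Image.open("current.png").convert("RGB")

    if baseline.size != current.size:
        print("layout changed: screenshots have different dimensions")
    else:
        diff = ImageChops.difference(baseline, current)
        bbox = diff.getbbox()        # None if the images are pixel-identical
        if bbox is None:
            print("no visual change")
        else:
            print("pixels changed inside bounding box:", bbox)
            diff.save("diff.png")    # save the per-pixel difference for inspection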


Yeah, I've looked at all the current tools and there's basically two types (other than just writing straight Selenium):

- Test recorders that aren't a great experience and output incomprehensible, brittle tests.

- Test composers that I can best describe as 90's SQL query builders for Selenium.

Complex JS apps are still a challenge for us (especially with some of the WTF code we come across in the wild), but we have a strategy in the works for them. We're still pre-release though. If you're interested, send me an email (donal@unravel.io) and I'll add you to our alpha list.


You are reinventing Gherkin. Please don't reinvent the wheel.


From reading the web page, I think an Unravel user won't need a specific, pre-defined language, in which case it's very deliberately not a DSL like Gherkin/Cucumber.


I've been interviewing junior/intermediate frontend candidates for the past few months now. 90% don't use any test tools, and their biggest complaint is their current employer forcing a new framework/library for the sake of being bleeding edge. While interesting to them, it turns out most of them really just want to see what they can do with vanilla JS.


Why would you expect that a junior developer (someone with very little or no development experience) will use a test tool or any other development technique? I only expect that a junior developer in the software field will be able to program.


I only expect that a junior developer in the software field will be able to program.

That is very often the case. It needs to change. Testing is a part of software development, and anyone who writes software should be aware of it. I feel the same way about documentation. And requirements. You can't write good software without knowledge of the processes that surround development. It isn't enough just to be able to write great code.


Maybe. Personally, I've come to think that you need the right tool for the right job.

If you spend more time writing / running tests than you would fixing the bugs they find, you may be doing it wrong. If you're writing documentation no one will read, you may be doing it wrong.

They clearly do have a place though. As for maintaining a set of requirements... I appreciate there must be some environments where what is required is well understood and relatively stable. I'm not quite sure if I should look forward to working in such a place or not!


I appreciate there must be some environments where what is required is well understood and relatively stable.

Actually there isn't. Every project, no matter how it's managed, changes as it goes on. It has to, because you learn and discover things along the way. That's why maintaining and understanding project requirements and how they've changed is incredibly important. If you don't keep on top of them then you end up with a project that wanders all over the place and never finishes. Or you build something that misses out important features. Or the project costs far too much. Requirements are not tasks, or epics, or things you're working on right now. They're the goals that the tasks and epics work towards.

(My first startup was a requirements management app.)


> (My first startup was a requirements management app.)

How did that work out? In the 90's it seemed every industry was switching to Documentum for that sort of thing.


> If you spend more time writing / running tests than you would fixing the bugs they find.

Why should those 2 activities be compared? They do not compare: writing/running tests is about discovering the bug, not fixing it. You still need to fix it after you have done your testing activity.

The time spent writing/running tests is better compared to the time spent on bug discovery without tests, i.e. how much you value the fact that your users are going to run into bugs, what the consequences of users hitting bugs are, what the process to report them is, etc.


You're right, unless you're at an extreme (zero automated testing, zero bugs found in the wild) it's much more nuanced as to what the balance is (or should be), but there is a balance.


I think there's somewhat of a gap between "junior" and "junior/intermediate", but given my understanding of university / bootcamp curricula, it's probably both the case that it's unrealistic to expect junior devs to have meaningful testing experience and that it's essential to make sure that potential hires have some awareness / positive attitude toward testing as part of software development.


>Why would you expect that a junior developer (someone with very little or no development experience)

I don't think that's the definition of a junior developer. Test tools are a part of building software; you should be hiring devs that have created projects that use tests of some sort, if not with the technology you're using.

>I only expect that a junior developer in the software field will be able to program.

I don't know how you can have little to no dev experience and know how to program.


Development and programming require different skills.

Developers need to know the development cycle, automated testing, continuous integration, the software life cycle, ticketing systems, source control systems, branching and merging, cooperating, etc.

Programmers need to know programming languages, patterns, algorithms, computer internals, effectiveness, profiling, debugging, etc.

A junior developer (in the software field) has little or no experience in development, so a junior developer is almost equal to a programmer, which causes a lot of confusion.


I think this distinction is a bit ancient...


Because we assume that developers are trained professionals, presumably with a CS or software engineering degree (or both), and that they've been properly trained in software development - which puts testing front and center.


Computer Science has absolutely nothing to do with software testing. Your software engineering classes will teach students about unit tests, but not much more.

If by 'testing' you really mean 'unit testing', as I suspect most junior engineers who claim testing experience do, then hope is already lost. The one saving grace is that there is enough churn in webdev that nothing lasts long enough to reveal how fragile it is.


Not if they take a good class in Test-driven development (TDD) - which I would recommend to students. The "science" behind it will outlive the practice churn.


Of course, if they take a good class in test-driven DEVELOPMENT, they will be developers. Development (problem solving with the goal of creating and supporting a product) is not the same as programming (creating instructions for a computer to do something).


Any recommendations for good TDD classes?


you're part of the problem


Really? Where I work, more than anything we expect our junior candidates to know how to test code and to be careful and incremental. It's easier to teach someone how to code better than to let them send anything to production with 0 tests.


To me as a web developer, the most exciting new development is React Native (not React itself) - it's redefining the border between web and native apps in a way that Cordova and Xamarin never did.


As a web dev, the most exciting development I see is the rise of progressive web apps and a shift away from native apps in situations where a web-like experience is more appropriate.

That said, I'll be thrilled if React Native gives rise to higher quality apps in situations where a native app is unavoidable (e.g. my bank's app).


What kinds of situations do you think are more appropriate for web or web-like apps?


I'm actually hard-pressed to think of any non-gaming interface that is better suited by a native app than a web app in 2017.

Five years ago native apps made a level of UX possible that was unheard of on the web, to say nothing of mobile. But today not only has HTML/js closed the gap, but whiz-bang native animations aren't impressive just on account of being novel anymore.


I'm feeling that React Native is just another artificial constraint we developers have to deal with. I would have preferred it if Facebook (and/or Apple, Google) would have pushed WebApps more instead. A web browser, with the right amount of love, would be more than capable of doing the stuff that React Native can do.


Google still is! Progressive web apps now have the ability to be installed "natively" on Android devices [0], meaning they show up in the app drawer like any other app rather than being limited to a home screen icon.

I think these are the future. Once they catch on with mainstream consumers, native apps won't stand a chance against the convenience of simply visiting a website to install/use. Plus, on the developer end, we finally have a true "write once, run anywhere" situation that doesn't involve any complex toolchains or hacky wrappers.

[0] http://www.androidpolice.com/2017/02/02/progressive-web-apps...


That sounds absolutely fabulous!

But do you think that Apple would embrace this technology, given that e.g. their app-store is generating lots of revenue?


Not at first. But if the tide turns enough, Apple will have to jump on board to avoid losing market share.


Asking the right question. And that's not sarcasm.


They won't, but they eventually won't have a choice. If the experience is simply better on other devices, consumers will move.


There's a market imo for a full solution that includes the front end and the entire backend, including deployment, seamless scaling, seamless upgrading, seamless backups, seamless local dev, seamless staging, etc.

99% of web apps need the same features, but most of this is still left to manually rolling your own.

I should be able to clone some repo, enter some DO/AWS/GOOG keys and push.


I've been using and loving Zappa[1] lately. Basically it lets you seamlessly deploy a Flask app to AWS Lambda -- that solves your deployment, scaling, upgrading, staging, backups, etc. And local dev is just running the Flask app locally.

[1] https://github.com/Miserlou/Zappa
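
For the curious, the whole setup is roughly a plain Flask app plus a small zappa_settings.json. A hedged sketch (bucket, region and stage names are placeholders; `zappa init` generates the settings file for you, so check Zappa's README for the current format):

    # app.py -- a plain Flask app; Zappa wraps it for AWS Lambda + API Gateway.
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/")
    def index():
        return jsonify(message="hello from Lambda")

    if __name__ == "__main__":
        app.run()   # local dev is just the normal Flask dev server

    # A minimal zappa_settings.json might look like this (placeholder values):
    #   {
    #     "production": {
    #       "app_function": "app.app",
    #       "aws_region": "us-east-1",
    #       "s3_bucket": "my-zappa-deployments"
    #     }
    #   }
    # Then `zappa deploy production` to ship it, `zappa update production` for changes.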


This solution is the opposite of "slowing down and learning how to write better web software".


I admit I'm not terribly familiar with it, so I might be misunderstanding, but isn't this more or less what AppEngine is?


The problem is that every developer creates his own way of doing these things and you have a complete mess without any standard way of doing things.


I think the most exciting trend in web development is the rise in popularity of functional programming styles in front-end frameworks.

This makes the complexity problem much easier to solve, as the code is (should be) less likely to cause an unanticipated mutated state which can't be easily tested for.


I remember this same "rise in popularity" in 2005. Every few years functional advocates get all excited (last time was F# support in Visual Studio, all C# programmers were going to switch, naturally).

I suspect the reality is a small subset of programmers think functional programming is amazing and everyone else hates it. You might think it reduces complexity, but a lot of people feel it reduces comprehensibility.


> I suspect the reality is a small subset of programmers think functional programming is amazing and everyone else hates it.

Yep. It's not that I hate it, I just don't like it. The thing is that the functional-praisers are much more vocal about how they love it, whereas people who write imperative code do not care much about Haskell.

We are happy with LINQ, and that's all 99% of us want/need.


Would you mind sharing an example or pointing me to a good guide that explains this concept? How does functional programming make a problem less complex than OO or imperative? I've heard this a couple of times, but the intuition has never quite clicked for me.


It's state. Functional programs tend to have less state, as their output is the same for some given input. With things like jQuery you quickly introduce state (say, whether some dropdown is open), which your next function will have to check is true or false before proceeding. And so on.


I'll give it a shot; functional programming style--some languages enforce it, some languages merely have features that allow for it if the author is disciplined enough to do so (JavaScript is in the latter camp)--generally eschews mutable state and side effects, i.e. a variable `foo` that is declared outside of the function cannot be altered by the function. Some "pure" functional languages restrict all functions to a single argument. This may feel like an unnecessary constraint (and opinions vary), but one thing that can't be denied is that it keeps your methods small and simple; in any case, it can be worked around by applying a technique called "Currying"[1], which is named after the mathematician & logician Haskell Curry, not the dish (also the namesake of the eponymous functional programming language [the mathematician, not the dish]).

Because nothing outside of the function can be changed, and dependencies are always provided as function arguments, the resulting code is extremely predictable and easy to test, and in some cases your program can be mathematically proven correct (albeit with a lot of extra work). Dependency injection, mocks, etc are trivial to implement since they are passed directly to the function, instead of requiring long and convoluted "helper" classes to change the environment to test a function with many side effects and global dependencies. This can lead to functions with an excessively long list of parameters, but it's still a net win in my opinion (this can also be mitigated by Currying).

A side-effect (hah) of this ruleset is that your code will tend to have many small, simple, and easy to test methods with a single responsibility; contrast this with long and monolithic methods with many responsibilities, lots of unpredictable side effects that change the behavior of the function depending on the state of the program in its entirety, and which span dozens or hundreds of lines. Which would you rather debug and write tests for? Tangentially, this is why I hate Wordpress; the entire codebase is structured around the use of side-effects and global variables that are impossible to predict ahead of runtime.

There is much, much more to functional programming (see Monads[2] and Combinators[3]), but if you don't take away anything else, at least try to enforce the no-side-effects rule. A function without side-effects is deterministic; i.e. it will always give you the same output for any given set of inputs (idempotence comes for free). Because everything is a function, functions are first-class citizens, and there are only a few simple data structures, it becomes easy to chain transformations and combine them by applying some of the arguments ahead of time. Generally you will end up with many generalized functions which can be composed to do anything you require without writing a new function for a specific task, thus keeping your codebase small and efficient. It's possible to write ugly functional code, and it's possible to write beautiful and efficient object-oriented code, but the stricter rules of functional style theoretically make the codebase less likely to devolve into incomprehensible spaghetti.
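
To make the no-side-effects and currying points concrete, here is a tiny sketch (in Python rather than JavaScript, purely because it's compact; functools.partial stands in for currying proper):

    # Impure: depends on and mutates state outside the function.
    total = 0
    def add_to_total(x):
        global total
        total += x          # side effect: the result depends on call history
        return total

    # Pure: output depends only on the inputs, trivial to test.
    def add(a, b):
        return a + b

    # Partial application (a stand-in for currying): fix one argument now,
    # supply the rest later, and compose small pieces.
    from functools import partial
    add_tax = partial(lambda rate, price: price * (1 + rate), 0.08)

    assert add(2, 3) == 5
    assert round(add_tax(100.0), 2) == 108.0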

Manning Publications has a book[4] on functional programming in javascript, which I own but haven't gotten around to reading yet, so I can't vouch for it. However, it does seem highly applicable.

[1] https://en.wikipedia.org/wiki/Currying

[2] https://en.wikipedia.org/wiki/Monad_(functional_programming)

[3] https://en.wikipedia.org/wiki/Combinatory_logic

[4] https://www.manning.com/books/functional-programming-in-java...


It's not only the front end; the backend is also getting a fair bit of traction.


>the complexity of development

Huge and seemingly often unacknowledged issue these days. And many attempted solutions seem to be adding fuel to the fire (or salt to the wound) by creating more tools (to fix problems with previous tools) ...

Red (red-lang.org) is one different sort of attempt at tackling modern software development complexity. It's like an improved version of REBOL, but aims to be a lot more - a single language (two actually, Red/System and Red itself) for writing (close to) the full stack of development work, from low level to high level. Early days though, and they might not have much for web dev yet (though REBOL had/has plenty of Internet protocol and handling support).


If you are counting on people slowing down, you're in the wrong business.

You're asking the wrong question. It shouldn't be "how do we get people to slow down?" It should be, "how do we make rapid software development better?"


It's probably just a matter of time, because all software ecosystems go through their phases of "maturity" regarding testing.

Not too long ago (in human-years, not internet-years), most Node packages weren't built with unit testing. Now it's quite common in the popular packages.

Website UI is probably the same thing. After all, it took us a really long time until the whole HTML5 spec finally stabilised.

So you will probably see the tipping point occur over the next 10 human years, or less.

And just like you, I've been really frustrated with the inadequacy of UI testing tools, especially with Selenium. So like @donaltroddyn, I set out to develop my own UI testing tool (https://uilicious.com/), to simplify the test flow and experience.

So wait around; you will see new tools, and watch them learn from one another. And if you want to dive right into it, we are currently running a closed beta.


Then consider: if people don't spend sufficient time/money on proper testing, how much time/money do they spend on security? Probably even less.


Container orchestrators becoming mainstream is something I'm very excited about. Tools like DC/OS, Nomad, Kubernetes, Docker Swarm Mode, Triton, Rancher make it so much easier to have fast development cycles. Last week I went from idea, to concept, to deployed in production in a single day. And it is automatically kept available, restarted if it fails, traffic is routed correctly, other services can discover it, the underlying infrastructure can be changed without anyone ever noticing it.

This also brings me to Traefik, one of the coolest projects I have come across in the last months.

Traefik + DC/OS + CI/CD is what allows developers to create value for the business in hours and not in days or weeks.


I've been researching container orchestration recently and I personally don't see the incentive to jump into containers from an infrastructure perspective. I think using Packer/Vagrant/Ansible is pretty easy and meets my needs. The orchestration overhead for containers seems like overhead I don't need just yet. So the big question I've been asking myself is at what point will an AWS AMI be less versatile than a Docker container, assuming it originated w/ Packer and I can build images for other clouds with Packer. From a developer perspective I am very excited about containers and believe local dev w/ Docker is warranted.


We mainly use Docker because it finally allows us to eradicate all the "Worked in dev" issues we had in the past. From an application perspective, Dev, Accept and Prod are identical.

Also, we deploy to production at least 4 times a day, the time from commit to deployable to production is about 30 minutes. And because it is a container it will start with a clean, documented setup (Dockerfile) every time. There is no possibility of manual additions, fixes or handholding.


I'm pretty excited for where it's taking things closer to the PaaS end of the spectrum. Been diving down that rabbit hole a bit in search of "easiest way for 2-3 devs to run a reliable infrastructure." Recently moved from EC2 to Heroku, which I'm pretty happy with, but not sure if will be more a stopgap or long-term. I like the direction OpenShift seems to be headed in.


> So the big question I've been asking myself is at what point will an AWS AMI be less versatile than a Docker container,

AMI only runs on AWS. Docker runs on anything. I don't think "versatile" is the word you are looking for.


I think Lyft's Envoy is similar to Traefik, but it also has logging/statistics, routing and gRPC support. And even more: https://lyft.github.io/envoy/docs/intro/what_is_envoy.html


I agree on kube + rancher + deis + traefik etc.. Can you elaborate on why and how you use DC/OS though?


We use DC/OS for all our stateless services, when we started looking at container orchestrators the bootstrap for DC/OS was very easy (automatic via cloudformation) and it was quite complicated for kubernetes.

We mainly use DC/OS to run more services on less instances.


Please take a look at the Cloud Native Computing Foundation (I'm the executive director) at cncf.io. We have a lot of free resources for learning more about the space.


Rancher + SpotInst is an amazing combination of convenience and cost effectiveness. I'd encourage everyone to give it a shot.


Honest question: this seems like you are making your own PaaS that you need to take care of; why not just use Heroku, etc.?


The new development is that the software to run your own Heroku is becoming open source and easy to operate.

From an "I just want to get my app deployed" perspective it may still be best to just use Heroku. But from a "new developments in the field" perspective, the fact that I can rent a few machines and have my own Heroku microcosm for small declining effort is pretty cool.


Thanks for taking the time to explain it. I think sometimes I'm just too focused on my own stuff.


I had no idea most of these amazing projects existed. Thanks for this


Deep learning architectures built by machines (so we no longer have to design architecture to solve problems) https://arxiv.org/abs/1611.01578

Transfer Learning (so we need less data to build models) http://ftp.cs.wisc.edu/machine-learning/shavlik-group/torrey...

Generative adversarial networks (so computers can get human like abilities at generating content) https://papers.nips.cc/paper/5423-generative-adversarial-net...


I read the infoGAN paper yesterday. It blew my mind. https://arxiv.org/pdf/1606.03657v1.pdf. This is a way to do disentangled feature representation learning without supervision.


All these are definitely cool, but I think we're still a long way from leaving the "look at this cool toy" status and stepping into the "I can add value to society" status.

Furthermore, if we consider that most of these DL papers completely ignore the fact that the nets must run for days on a GPU to get decent results, then everything appears way less impressive. But that's just my opinion. I love working in deep learning, but we still have LOTS of work to do.


Could you elaborate? After running for days / weeks/ months the output is a net that can do inference in seconds, or with some now-common techniques milliseconds with only small reductions in accuracy. These nets can then be deployed to phones to solve a rapidly increasing number of identification tasks, everything from plants to cancer.

The time from theoretical paper to widely deployed app is smaller in DL than in any other field I have experience with.


It's true that there aren't too many practical applications of GANs yet, but I'd argue that transfer learning is already pretty powerful. It's fairly commonplace to approach a computer vision task by starting with VGG/AlexNet/etc. and fine-tuning it on a relatively small dataset.


There is a LOT of investment in model training right now, with frameworks, specialized hardware (like Google's TPU), cloud services, etc., not to mention the GPU vendors themselves scrabbling like mad to develop chipsets that accommodate this more efficiently.

It's going to take less and less time and money to train a useful model.


I have a hard time reading most research papers on deep learning, as they are geared towards mathematicians who already know that lingo.


>> Generative adversarial networks (so computers can get human like abilities at generating content)

Does this only apply to artistic content, or also to engineering content ? Say PCB layouts, architectural plans, mechanical designs, etc ?


Artistic only. You'd need an exact verifier in case of engineering designs, and something built with neural networks is not one.

To get a better understanding (other than reading a paper) read this excellent blog post https://openai.com/blog/generative-models/


GANs are a general tool -- they just happen to get a lot of attention for generating images of stuff. Here's an example for generating sequences [1]. The example is language oriented, but ultimately GANs are interesting because you can use them to build a generator for an arbitrary data distribution. This can have many applications in engineering (to take a random example -- generating plausible looking chemical structures under a certain set of constraints). As with any ML application, you need to quantify your tolerance for "inaccuracy" (in a generative setting, how well the generated distribution matches the true data distribution). This is simply an engineering trade-off and will vary based on the application.

[1] https://arxiv.org/abs/1609.05473
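
To see how little machinery the core adversarial idea needs, here's a toy PyTorch sketch that learns to generate samples from a 1-D Gaussian. All hyperparameters are arbitrary, and real image or sequence GANs (like the one in [1]) need far more care to train.

    # Toy GAN: the generator learns to mimic samples from N(4, 1.25).
    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # noise -> sample
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # sample -> P(real)

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(5000):
        real = torch.randn(64, 1) * 1.25 + 4.0      # samples from the target distribution
        fake = G(torch.randn(64, 8))

        # Train the discriminator: real -> 1, fake -> 0.
        opt_d.zero_grad()
        loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        loss_d.backward()
        opt_d.step()

        # Train the generator: try to make the discriminator call fakes real.
        opt_g.zero_grad()
        loss_g = bce(D(fake), torch.ones(64, 1))
        loss_g.backward()
        opt_g.step()

    samples = G(torch.randn(1000, 8))
    print("generated mean/std:", samples.mean().item(), samples.std().item())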


The approach was applied without any real knowledge of art, and even though it hasn't been applied to the domains you mentioned, I don't see why it couldn't be.

[edit]: it is a lot harder to build a NN when there are very strict constraints. But it is also a lot easier to verify and penalize it, and to generate synthetic data.


What's new in transfer learning?


It's all subjective, but as a data analyst I'm excited about probabilistic databases. Short version: load your sample data sets, provide some priors, and then query the population as if you had no missing data.

Most developed implementation is BayesDB[1], but there's a lot of ideas coming out of a number of places right now.

[1] http://probcomp.csail.mit.edu/bayesdb/


Sounds like in many applications of machine learning (I'm thinking mainly of the swathes that name-drop it on a landing page, and probably usually mean 'linear regression') this could replace the brunt of the work.

e.g. store customer orders in the DB, and query `P(c buys x | c bought y)` in order to make recommendations - where `c buys x` is unknown, but `c bought y` occurred, and we know about 'other cs' x and y.

Is that sort of how it works?


That's exactly how it works.

The way I see it, the real utility comes from the fact that domain models such as those in a company's data warehouse are typically very complex, and a great deal of care often goes into mapping out that complexity via relational modelling. It's not just that c bought x and y, but also that c has an age, and a gender, and last bought z 50 days ago, and lives in Denver, and so on.

Having easy access to the probability distributions associated with those relational models gives you a lot of leverage to solve real life problems with.
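
As a crude, BayesDB-free illustration of that kind of conditional query, you could of course estimate it directly from an orders table (column names are hypothetical). The point of something like BayesDB is that it generalizes this beyond naive counting, to missing data and many interacting variables.

    # Naive empirical estimate of P(customer buys x | customer bought y).
    import pandas as pd

    orders = pd.read_csv("orders.csv")   # assumed columns: customer_id, product

    def p_buys_given_bought(df, x, y):
        baskets = df.groupby("customer_id")["product"].apply(set)
        bought_y = baskets[baskets.apply(lambda s: y in s)]
        if len(bought_y) == 0:
            return float("nan")
        return bought_y.apply(lambda s: x in s).mean()

    print(p_buys_given_bought(orders, x="headphones", y="laptop"))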


Would you be so kind as to provide several introductory articles to probabilistic matching of data? Fuzzy searching, most-probable matches, things like that?


BayesDB is being commercialized by startup Empirical Systems:

http://empirical.com (still dark atm)

co-founded by CEO Richard Tibbetts, who was also a co-founder of StreamBase (acquired by TIBCO).


For more details, check out this talk by Mansinghka, one of the main contributors of bayesDB: https://www.youtube.com/watch?v=-8QMqSWU76Q.


Interesting. Do you know anything about agent modelling? Any idea how/if it ties into this?


The agent modelling that I'm aware of is in simulation. I have a feeling that there would be a lot of interesting duality between the fields of agent based simulation and monte-carlo based probabilistic modelling, but I don't know enough about the former to say off hand.


ABM is an MC method, because different individual agents randomize their behavior based on distributions associated with possible courses of action defined by their agent type.


What's agent modelling?


Here's an agent modeling framework I remember from the early 2000s:

http://jade.tilab.com/

I'm curious to know if it's related to the current discussion.


Isn't that fundamentally the Netflix problem? Or am I missing something?


Compilation:

- Meta-tracing, e.g. PyPy.

- End-to-end verification of compilers, e.g. CompCert and CakeML.

Programming languages:

- Mainstreamisation of the ideas of ML-like languages, e.g. Scala, Rust, Haskell, and the effect these ideas have on legacy languages, e.g. C++, Java 9, C#.

- Beginning of use of resource types outside pure research, e.g. affine types in Rust and experimental use of session types.

Foundation of mathematics:

- Homotopy type theory.

- Increasing mainstreamisation of interactive theorem provers, e.g. Isabelle/HOL, Coq, Agda.

Program verification:

- Increasing ability to have program logics for most programming language constructs.

- Increasingly usable automatic theorem provers (SAT and SMT solvers) that just about everything in automated program verification 'compiles' down to.


I work in CPU design. So I'd add that the tools for formally verifying CPUs have come a very long way in the last two years, and the next two years look like they will be very exciting indeed.


Wow! What tools are you guys using? Do you have the same for microcode? This is really interesting!


There is a big overlap in prover theory. HOL Light comes from John Harrison at Intel.

I don't know much about CPUs, but I suspect that one of the core problems of software verification, the absence of a useful specification, isn't much of an issue with hardware.


I'd also like to hear more about what tools you're using/looking forward to.


There's a new project to implement TLS in python [1] with the idea to have secure and verifiable code, but so far (to my knowledge) there's no formal tool involved in the verifiability aspect -- the approach is mostly around keeping the implementation as RFC-compliant as possible.

I'd be really interested in applying any of these techniques to a full TLS implementation.

[1] https://github.com/pyca/tls


I recommend looking at Cedric Fournet's group [1]. They have (or are close to having) a performant networking stack that's fully verified, e.g. [2].

[1] http://research.microsoft.com/en-us/um/people/fournet

[2] K. Bhargavan et al, Implementing TLS with Verified Cryptographic Security. http://research.microsoft.com/en-us/um/people/fournet/papers...


> Homotopy type theory.

Can you talk more about this? I even got THE book on this (haven't really read it yet though), and I think I get the rough ideas, but I'd be curious to hear what HoTT means to you (lol).


Not the OP, but the cool thing about HoTT to a user of proof assistants is that it makes working with "quotients" easier. I put quotients in parentheses because really HoTT is about generalizing the idea of a quotient type. Quotients of sets/algebras are one of the core tools of mathematics and old school type theory doesn't have them so you have to manually keep around equivalence relations and prove over and over again that you respect the relations.

In HoTT, there is an extension of inductive types that allows you to, not just have constructors, but also to impose "equalities" so these generalized "quotients" really have first-class status in the language.

As far as "exciting developments" in HoTT, the big one right now is Cubical Type Theory [1], which is the first implementation of the new ideas of HoTT where Higher inductive types and the univalence axiom "compute" which means that the proof assistant can do more work for you when you use those features. I just saw a talk about it and from talking to people about it, this means that it won't be too long (< 5 years I predict) before we have this stuff implemented in Agda and/or Coq.

Finally, I just want to say to people that are scared off or annoyed by all of the abstract talk about "homotopies" and "cubes", you have to understand that this is very new research and we don't yet know the best ways to use and explain these concepts. I certainly think that people will be able to use this stuff without having to learn anything about homotopy theory, though the intuition will probably help.

[1] https://github.com/mortberg/cubicaltt


HoTT means a lot to me (lol)

HoTT brought dependent types and interactive theorem proving to the masses. Before HoTT, the number of researchers working seriously on dependent type theory was probably < 20. This has now changed, and the field is developing at a much more rapid pace than before.


For someone wanting to begin involving program verification in a practical way to their day-to-day work, do you have any suggestions, resources, or anything?


That depends on your background and your goals.

How much do you know about modern testing, abstract interpretation, SAT/SMT solving? In any case, as of Feb 2017, a lot of this technology is not yet economical for non-safety critical mainstream programming. Peter O'Hearn's talk at the Turing Institute https://www.youtube.com/watch?v=lcVx3g3SmmY might be of interest.


Very little on abstract interpretation and SAT/SMT solving. I'm effectively starting from 0. I'll go through the talk, thanks!

Why isn't it economical yet?


Google "certified programming with dependent types", "program logics for certified compilers" and "software foundations class". Also, work through the Dafny tutorials, here: http://rise4fun.com/Dafny/tutorial

There are some ways in which these tools are not economical. There is currently a big gap. On one side of the gap, you have SMT solvers, which have encoded in them decades of institutional knowledge about generating solutions to formulas. An SMT solver is filled with tons of "strategies" and "tactics", which are also known as "heuristics" and "hacks" to everyone else. It applies those heuristics, and a few core algorithms, to formulas to automatically come up with a solution. This means that the behavior is somewhat unpredictable, sometimes you can have a formula with 60,000 free variables solved in under half a second, sometimes you can have a formula with 10 that takes an hour.
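
To give a concrete (and entirely made-up) feel for what gets handed to a solver, here's a tiny query using the z3 Python bindings; whether something like this comes back instantly or times out depends on which of those heuristics happen to fire, not just on how big the formula is:

    # pip install z3-solver
    from z3 import Ints, Solver, sat

    x, y = Ints("x y")
    s = Solver()
    s.add(x > 0, y > 0, x * x + y * y == 25)  # is there a positive integer solution?

    if s.check() == sat:
        print(s.model())  # e.g. [x = 3, y = 4]
    else:
        print("no solution")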

It sucks when that's in your type system, because then your compilation speeds become variable. Additionally, it's difficult to debug why compiling something would be slow (and by slow, I mean sometimes it will "time out" because otherwise it would run forever) because you have to trace from your programming language's variables to the solver's variables. If a solver can say "no, this definitely isn't safe," most tools are smart enough to pull the reasoning for "definitively not safe" out into a path through the program that the programmer can study.

On the other end of the spectrum are tools like coq and why3. They do very little automatically and require you, the programmer, to specify in painstaking detail why your program is okay. For an example of what I mean by "painstaking" the theorem prover coq could say to you "okay, I know that x = y, and that x and y are natural numbers, but what I don't know is if y = x." You have to tell coq what axiom, from already established axioms, will show that x = y implies y = x.
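
For a taste of that level of explicitness (sketched in Lean rather than coq; the point is the same): even symmetry of equality gets discharged by naming the relevant lemma yourself.

    -- x = y implies y = x: you point the prover at Eq.symm explicitly.
    example (x y : Nat) (h : x = y) : y = x := Eq.symm h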

Surely there's room for some compromise, right? Well, this is an active area of research. I am working on projects that try to strike a balance between these two design points, as are many others, but unlike the GP I don't think there's anything to be that excited about yet.

There's a lot of problems with existing work and applying it to the real world. Tools that reason about C programs in coq have a very mature set of libraries/theorems to reason about memory and integer arithmetic but the libraries they use to turn C code into coq data structures can't deal with C code constructs like "switch." Tools that do verification at the SMT level are frequently totally new languages, with limited/no interoperability with existing libraries, and selling those in the real world is hard.

It's unlikely that any of this will change in the near term because the community of people that care enough about their software reliability is very small and modestly funded. Additionally, making usable developer tools is an orthogonal skill from doing original research, and as a researcher, I both am not rewarded for making usable developer tools, and think that making usable developer tools is much, much harder than doing original research.


> This means that the behavior is somewhat unpredictable, sometimes you can have a formula with 60,000 free variables solved in under half a second, sometimes you can have a formula with 10 that takes an hour.

It sadly also depends a lot on the solver used and the way the problem was encoded in SMT. For a class in college I once tried to solve Fillomino puzzles using SMT. I programmed two solutions: one used a SAT encoding of Warshall's algorithm and the other constructed spanning trees. On some puzzles the first solver required multiple hours whereas the second only needed a few seconds, while on other puzzles it was the complete opposite. My second encoding needed hours for a puzzle I could solve by hand in literally a few seconds. SAT and SMT solvers are extremely cool, but incredibly unpredictable.


Absolutely! For another example, Z3 changes what heuristics it has and which it prefers to use from version to version. What happens when you keep your compiler the same but use a newer Z3? Researchers that make these tools will flatly tell you not to do that.

It's frustrating because this stuff really works. Making it work probably doesn't have to be hard, but researchers that know both about verification and usability basically don't exist. I blame the CS community's disdain for HCI as a field.


Thanks for taking the time to write the suggestions and detail the pain points that exist at the moment.

I had heard about Dafny but hadn't seen the tutorial!

> Additionally, making usable developer tools is an orthogonal skill from doing original research, and as a researcher, I both am not rewarded for making usable developer tools, and think that making usable developer tools is much, much harder than doing original research.

When you're saying they're orthogonal, are you effectively saying that researchers generally don't have 'strong programming skills' (as far as actually whacking out code). If so, how feasible would it be for someone who is not a researcher, but a good general software engineer, to work on the developer tools side of things?


There's more I could write about researchers and their programming skills, but to keep it brief: researchers aren't directly rewarded for being good programmers. It's possible to have a strong research career without really programming all that much. However, if you are a strong programmer, some things get easier. You aren't directly rewarded for it though. For an extreme counter-example, consider the researchers that improve the mathematics platform SAGE. Their academic departments don't care about the software contributions made and basically just want to see research output, i.e. publications.

I think that this keeps most researchers away from making usable tools. It's hard, they're not rewarded for making software artifacts, they're maybe not as good at it as they are at other things.

I think it's feasible for anyone to work on the developer tools side of things, but I think it's going to be really hard for whoever does it, whatever their background is. There are lots of developer tool projects that encounter limited traction in the real world because the developers do what makes sense for them, and it turns out only 20 other people in the world think like them. The more successful projects I've heard about have a tight coupling between language/tool developers and active language users. The tool developers come up with ideas, bounce them off the active developers, who then try to use the tools and give feedback.

This puts the prospective "verification tools developer" in a tight spot, because there are only a few places in the world where that is happening nowadays: Airbus/INRIA, Rockwell Collins, Microsoft Research, NICTA/GD. So if you can get a job in the tools group at one of those places, it seems very feasible! Otherwise, you need to find some community or company that is trying to use verification tools to do something real, and work with them to make their tools better.


Maybe I'm particularly easy to make happy.

Compilers, in particular optimising compilers, are notoriously buggy; see John Regehr's blog. An old dream was to verify them. The great Robin Milner, who pioneered formal verification (like so much else), wrote in his 1972 paper Proving compiler correctness in a mechanized logic, about all the proofs they left out: "More than half of the complete proof has been machine checked, and we anticipate no difficulty with the remainder". It took a while before X. Leroy filled in the gaps. I thought it would take a lot longer before we would get something as comprehensive as CakeML; indeed I had predicted this would only appear around 2025.

   It sucks when that's in your type system
Agreed, and that is one of the reasons why type-based approaches to program verification (beyond simplistic things like Damas-Hindley-Milner) are not the right approach. Speedy dev tools are vital. A better approach towards program verification is division of labour: use lightweight tools like type inference and automated testing in early and mid development, and do full verification only when the software and specifications are really stable, in an external system (= a program logic with associated tools).

   making usable developer tools is much, 
   much harder than doing original research.
I don't really agree that the main remaining problems are of a UI/UX nature. The problem in program verification is that ultimately almost everything you want to automate is NP-complete or worse: typically (and simplifying a great deal) you need to show A -> B, where A is something you get from the program (e.g. a weakest pre-condition, or a characteristic formula), and B is the specification.

In the best case, deciding formulae like A -> B is NP-complete, but typically it's much worse. Moreover, program verification of non-trivial programs seems to trigger the hard cases of those NP-complete (or worse) problems naturally. Add to that the large size of the involved formulae (due to large programs), and you have a major theoretical problem at hand, e.g. solve SAT in n^4, or find a really fast approximation algorithm. That's unlikely to happen any time soon.
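
A toy instance of that shape (my own example, not taken from any real tool): verifying that x := x + 1 establishes x > 0 whenever x > -1 held beforehand boils down to the single implication

    (x > -1)  ->  (x + 1 > 0)

where the right-hand side is the weakest pre-condition wp(x := x + 1, x > 0) = (x + 1 > 0). Real programs yield thousands of obligations of this shape over much richer theories, and deciding them is where the hardness bites.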

We don't even know how to parallelise SAT effectively, or how to make SAT fast on GPUs. That is embarrassing, given how much of deep learning's recent success boils down to gigantic parallel computation at Google scale. Showing that SAT is intrinsically not parallelisable, or not even GPUable (should either be true), looks like a difficult theoretical problem.

    as a researcher, I both 
    am not rewarded for
I agree. But for the reasons outlined above, that is as it should be: polishing UI/UX is something the commercial space can and should do.

   the community of people that care enough
   about their software reliability is very
   small and modestly funded.
This is really the key problem. Industry doesn't really care about program correctness that much (save for some niche markets). The VCs of my acquaintance always tell me: "we'll fund your verification ideas when you can point to somebody who's already making money with verification". For most applications type-checking and solid testing can get you to "sufficiently correct".

You can think of program correctness like the speed of light: you can get arbitrarily close, but the closer you get the more energy (cost) you have to expend. Type-checking and a good test suite already catch most of the low-hanging fruit that the likes of Facebook and Twitter need to worry about. As of 2017, for all but the most safety-critical programs, the cost of dealing with the remaining problems is disproportionate to the benefits. Instagram or Whatsapp or Gmail are good enough already despite not being formally verified.

Cost/benefit will change only if the cost of formal verification is brought down, or the legal frameworks (liability laws) are changed so that software producers have to pay for faulty software (even when it's not an Airbus A350 autopilot). I know that some verification companies are lobbying governments for such legislative changes. Whether that's a good thing, regulatory capture, or something in between is an interesting question.

Another dimension of the problem is developer education. Most (>99%) of contemporary programmers lack the necessary background in logic even to think properly about program correctness. Just ask the average programmer about loop invariants and termination orders: they won't be able to produce them even for 3-line programs like GCD. This is not a surprise, as there is no industry demand for this kind of knowledge, and it will probably only change with a change in demand.
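
To make the GCD point concrete, here is the kind of annotation I mean, sketched in Python with the invariant and termination measure written as comments and checked only dynamically (in a verifier like Dafny these would be real invariant/decreases clauses):

    from math import gcd as _gcd  # reference function, used only to state the invariant

    def my_gcd(x: int, y: int) -> int:
        assert x > 0 and y > 0
        a, b = x, y
        while b != 0:
            # Loop invariant: gcd(a, b) == gcd(x, y)
            assert _gcd(a, b) == _gcd(x, y)
            # Termination measure: b is a non-negative integer that strictly
            # decreases each iteration, because a % b < b.
            a, b = b, a % b
        # On exit b == 0, so gcd(a, 0) == a == gcd(x, y).
        return a

Most working programmers could write the loop; far fewer could state the two comments, and that is the gap.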


Thanks for the long reply! I don't have the time to continue this. I mostly agree with what you say.

I do think that making verification tools easier is something that researchers could and should be thinking about. Probably not verification and logic researchers directly, but someone should be carefully thinking about it and systematically exploring how we can look at our programs and decide they do what we want them to do. I have some hope for the DeepSpec project to at least start us down that path.

I also have hope for type-level approaches where the typechecking algorithms are predictable enough to avoid the "Z3 in my type system" problem but expressive enough that you can get useful properties out of them. I think this is also a careful design space and another place that researchers lose because they don't think about usability. They just say "oh, well I'll generate these complicated SMT statements and discharge them with Z3, all it needs to work on are the programs for my paper and if it doesn't work on one of them, I'll find one it does work on and swap it out for that one." Why would you make a less expressive system if usability wasn't one of your goals?


I'd be interested in your suggestions (brief if you have no time) about what kind of novel interfaces you have in mind. I have spent too much time on the command line to have any creative thoughts about radically different interfaces for verification.


>> not yet economical

Don't know much about it, but verum.com claims 50% reduction in development costs.


My field is hypnosis, or more generally, "changework" which is jargon, but essentially hacking the psychology of clients to get desired outcomes.

There's been a renaissance of study in placebo effects, meditation, and general frameworks for how people change belief for therapeutic purposes or otherwise, but to me, that's been going on for a long time and is more about acceptance than being a new development.

One of the most exciting developments that's been coming out recently is playing with language to do what's called context-free conversational change.

Essentially, you can help someone solve an issue without actually knowing the details or even generally what they need help with. It's like homomorphic encryption for therapy. A therapist can do work, a client can report results, but the problem itself can be a black box along with a bit of the solution as well since much of the change is unconscious.

It works better with feedback (a conversation) of course, but often can be utilized in a more canned manner if you know the type of problem well enough.

I'm working on putting together an automated solution that's based on some loose grammar rules, NLP, Markov chains, and anything else I can use to help a machine be creative in language to help people solve their own problems, but as a first step as a useful tool for beginner therapists to help them get used to the ideas and frameworks with language to use.
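
To be clear about scope: the Markov chain piece on its own is almost trivial. A toy sketch (purely hypothetical, not my actual system) that learns word-to-word transitions from a few therapist-style phrasings and babbles new ones looks like this; all the hard work lives in the grammar rules and feedback wrapped around it:

    import random
    from collections import defaultdict

    corpus = [
        "who would you not be if you were not you",
        "who are you not without that problem",
    ]

    transitions = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for current_word, next_word in zip(words, words[1:]):
            transitions[current_word].append(next_word)

    def generate(start: str, max_words: int = 12) -> str:
        out = [start]
        while len(out) < max_words and transitions[out[-1]]:
            out.append(random.choice(transitions[out[-1]]))
        return " ".join(out)

    print(generate("who"))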

So essentially, I'm getting a good chunk of the way toward hacking on a machine that can reliably work on people's problems without having to train a full AI or anything remotely resembling real intelligence, just mimicking it.

Before you go thinking, "Didn't they do that with Eliza?" Well yes, in a way, but my implementation is using an entirely different approach.


Your interesting comment made me think of Oblique Strategies cards, which "can help someone solve an issue without actually knowing the details".

https://en.wikipedia.org/wiki/Oblique_Strategies


Thank you. Those are great and I should look at incorporating more of those ideas into the work I'm doing.


Is there a useful introduction reference to 'context-free conversational change'? The Google results did not appear promising.


The place I learned it was hypnosistrainingacademy.com and where he derived his teachings from was a man named John Overdurf who had a program called beyond words. I'd start with Igor's material first. He calls his version (which is based upon, but wholly distinct from beyond words) mind bending language. There are even card decks and training programs to consume. Or free articles if you wish to check it out.


fancy words, exciting claims, and absolutely no detail whatsoever.

with all due respect, said politely, it is my opinion that you are a charlatan.


Thank you for expressing your opinion.

I wasn't interested in long citations or garnering proof of my work in particular with training a machine to do this work. I simply wished to add to this thread and did so, in order to show someone out there, maybe even you, what else is going on that is exciting in my little corner of the world.

I'm not that good of a programmer, so it's not in a state that it does work yet. I hope my original comment didn't suggest otherwise, but let me be perfectly clear here: I have no working machine implementation that can do what I want yet. It can work with simple canned responses like Eliza, but it's not enough. I am working on employing all of the techniques and tools mentioned, but progress is slow.

However, this is work and change I employ daily with my clients professionally and I can assure you that it does work.

You don't even have to take my word for it.

Consider....seriously consider: who would you not be if you weren't you?

If you thought about that one for a sec and felt a little spaced out for a second, you did very well.

If you came up with something quickly like "me" and didn't really actually consider the question, allow me to pose another to you. Again, seriously consider this. Read it a few times. Imagine emphasis on different words each time.

Who are you not without that problem you are interested in solving?

This work can be made more difficult by text only and seriously asynchronous communication, which is why I mentioned it being easier within conversation.

If you are interested in more, google "mind bending language" or "attention shifting coaching" and find Igor Ledochowski and John Overdurf. Their work has helped me change the lives of thousands.


I'm not GP but I'll give it a try:

> You don't even have to take my word for it.

Honest question: how not?

> who would you not be if you weren't you?

Depending on how you parse the sentence, either "someone else" or "that's just a paradox". Essentially the concept of "me" as an entity is fundamentally flawed.

Playing with the meanings of "me" and "not me" in a subjunctive form doesn't make the question very interesting (as in non-trite), to be honest. I guess the intent is not to be fresh but to be thought-provoking or similar, or setting the listener in a certain mindset? Still, sets my mind in the "meh" state.

> Who are you not without that problem you are interested in solving?

I'm not my problems. I'm also not not-my-problems. Actually I am not (I isn't?). I don't see how this helps with anything, though.

Either way, your questions pose (to me) more philosophical thinking (which I already do, anyways) than mindbending or whatever. Maybe my mind is already bent... and I have to say it didn't go very well ;)

A long time ago I came to the conclusion that these questions are merely shortcomings in how language and cognition works. Metaphysics, ontology (and even epistemology) are just fun puzzles with no solution, which I'm ultimately obliged to answer with "who the f--- cares".

Kant was right.

Not that anything you said is directly contradicted by Kant. In fact I'd say it fits very well within the idea that "human mind creates the structure of human experience". It's just never been really useful to me in any way. I really, really, want to know more of (and even believe in) your changework but, often being presented with vague ideas, no one has ever made a solid case on how it isn't, as GP said, charlatanry.


>fancy words, exciting claims, and absolutely no detail whatsoever

This is true of basically every post in this thread. If you're interested in learning more, there are more friendly ways of asking.


No, it's not true of most posts in this thread. Many posts in here have details. Admittedly, not enough to take the audience and make them expert enough to understand the breakthroughs. But enough to show there is something real being described.

Re: friendliness -- I believe I expressed the opinion that someone is a charlatan in as friendly of a manner as is possible.


Sounds interesting. Would you care to share any references?


Most certainly. This current work I am doing builds upon the mountaintops of huge amounts of work built by two men who base their work on yet more huge foundations.

Igor Ledochowski - http://hypnosistrainingacademy.com

John Overdurf - http://JohnOverdurf.com

As far as context-free therapy goes, that's a bit of an advanced subject, but can be learned and mastered through some of their programs.

The key tenets are simple though. As a model, consider that human language builds around 5 concepts: Space, Time, Energy, Matter, and Identity. These 5 also map cleanly to questions (5Ws and H) and language predicates in human language. Space is Where, Time is When, Energy is How, Matter is What, and Identity carries two with Why and Who.

Every problem you've ever had is built up of some combination of the 5 in a specific way, unique to you.

The pattern of all change is this:

1) Associate to a problem, or in other words, bring it to mind.

2) Dissociate from the problem, or basically get enough distance from it so that you can think rationally and calmly. Similar to a monkey not reaching for a banana when a tiger is running after it, your brain does not do change under danger and stress well. It can, but that usually leads to problems in the first place.

3) Associate (think about, experience) a resource state. Another thought or experience that will help with this one, for example if someone were afraid of clowns, I'd ask a question like, "What clowns fear you?" It usually knocks them out of the fear loop for a second.

4) While thinking about the resource, recall the problem and see how it has changed. Notice I said has changed. It always changes. You can never do your problem the same again. Will this solve things on the first go? Maybe. Maybe not, but it's enough to get a foothold and a new direction and loop until it's done.

Which is what makes this fun and exciting to do in person, and fun and exciting to teach a machine to mimic, too.


It seems like the missing step, right between 3 and 4, is "and then a miracle occurs."

That's why I made my original comment. Maybe you're not a charlatan, in which case I'd have to conclude you're thinking irrationally and have been deceived by some form of magical thinking.

You have not proposed any mechanism by which these steps can form a consistent treatment for problems that individuals have struggled with for years. You've merely declared that it will, and a whole lot of faith is required.

Other posts in this thread mostly propose a mechanism, even if we readers don't have the prerequisites to fully understand it. For example, consider the proposal that machine learning could be applied to the mundane tasks a radiologist performs. It may or may not pan out, but it has a basis.


I come to changework (coaching) from a slightly different direction; I'll share what I see as underlying mechanisms in case it helps.

Basically, what we do is based on how we see things. If we can change how we see things, then new actions & results become available.

Then the question just becomes, how can we change how we see things.

If how we see something comes from what we've experienced, then introducing a new experience can have us see it differently.

If how we see something comes from what we think about it, then we can introduce a new thought about it.

The point being to change the internal mental model related to the thing, so that we see it differently, we experience it differently, it occurs for us differently than it did before.

In the case above, step 3 introduced a new thought and internal experience related to the thing, and thus the step between 3 and 4 is, "their internal mental model, connected to the thing, changed".

Again, the mechanism (and the missing step) becomes, "change how we see & experience something, change our internal model relating to it". And then, some possibilities for triggering that include having a new thought about it, having a new experience about it; and various techniques can exist for introducing those experiences or thoughts.

At least, that's how I see it (how it occurs for me, how I've experienced it).


There must be a word distributed analogously to "charlatan" which describes your role here, but it escapes me at the moment.


Not really my main field, but in web technology it seems that serverless architectures such as AWS Lambda will be a pretty big game changer in the near future:

Lambdas are lightweight function calls that can be spawned on demand in sub-millisecond time and don't need a server that's constantly running. They can replace most server code in many settings, e.g. when building REST APIs that are backed by cloud services such as Amazon DynamoDB.
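
For a rough idea of the programming model, a Python Lambda behind API Gateway is little more than a handler function; the table name and event shape below are made up for illustration:

    import json

    import boto3

    table = boto3.resource("dynamodb").Table("products")  # hypothetical table

    def handler(event, context):
        # API Gateway proxy integration puts path parameters on the event.
        product_id = event["pathParameters"]["id"]
        item = table.get_item(Key={"id": product_id}).get("Item")
        return {
            "statusCode": 200 if item else 404,
            "body": json.dumps(item or {"error": "not found"}, default=str),
        }

You deploy just that function and pay per invocation; provisioning, scaling and the HTTP frontend are someone else's problem.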

I've heard many impressive things about this way of designing your architecture, and it seems to be able to dramatically reduce cost in some cases, sometimes by more than 10 times.

The drawback is that currently there is a lot of vendor lock-in, as Amazon is (to my knowledge) the only cloud service that offers lambda functions with a really tight and well-working integration with their other services (this is important because on their own lambdas are not very useful).


I have to admit, I'm pretty bearish when it comes to serverless. Mostly because it's an abstraction which leaks to hell and back.

Your input is tightly restricted, and with Amazon in particular, easy to break before you even get to the Lambda code (the Gateway is fragile in silly ways). Your execution schedule is tightly controlled by factors outside your control - such as the "one Lambda execution per Kinesis shard". You can be throttled arbitrarily, and when it just fails to run, you are limited to "contact tech support".

In short, I can't trust that Lambda and its ilk are really designed for my use cases, and so I can only really trust it with things that don't matter.


I'm bearish on it right now, though conceptually it's a fantastic idea which just has quite a way to go before it's ready for prime time. I definitely think a lot of people have jumped the gun by pushing serverless before it's really ready for the outside world.


Azure has Functions too: https://azure.microsoft.com/en-gb/services/functions/

Auth0 has Web Tasks: https://webtask.io/

Am sure there are many more implementations out there. Agree that vendor lock-in is always a concern.


To avoid vendor lock in there is Fission that runs on Kubernetes: http://fission.io/


>can be spawned on demand in sub-millisecond time

But the reality is that they don't, with cold-start times upward of 30 seconds. If you use them enough to avoid the cold-start penalties, then you're better off with reserved instances because lambdas are 10x the price. If you can't handle the 30 second penalty then you're better off with reserved instances because they're always on. If you have rare and highly latency-tolerant events, then use lambda.


To add, you don't really need that many lambda calls for it to cost the same as a small always-running instance. You can still use that instance lambda style if you wish, with automatic deployment.
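
A quick back-of-the-envelope (treating the prices as illustrative assumptions rather than current AWS pricing) shows how few calls it takes before a small always-on instance is competitive:

    # Assumed prices: $0.20 per 1M requests, $0.00001667 per GB-second,
    # a small always-on instance at ~$5/month; 200 ms per call at 128 MB.
    per_request = 0.20 / 1_000_000
    per_call_compute = 0.2 * 0.128 * 0.00001667  # duration_s * memory_GB * $/GB-s
    instance_per_month = 5.0

    break_even_calls = instance_per_month / (per_request + per_call_compute)
    print(f"{break_even_calls:,.0f} calls/month")              # roughly 8 million
    print(f"{break_even_calls / (30 * 24 * 3600):.1f} req/s")  # roughly 3 per second

So a sustained few requests per second is already enough to make the always-on instance the cheaper option, under these assumptions.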


I am pretty bullish on "serverless". I really do think it's the future. It fulfills the vision of Cloud Computing. But it's early days and I wouldn't yet bet the ranch on it. I am doing a new project with Azure Functions and so far am quite happy with the offering.


I don't know; to me it looks like the serverless benefits (i.e. theoretically lower cost) are not worth the downsides (essentially complete vendor lock-in), but I would love to hear why I am wrong :-)



If you go serverless - specifically AWS Lambda - then you must be comfortable with using old and out-of-date programming environments, as these are what AWS Lambda supports.

There is no cutting edge with serverless on AWS.


In the Bitcoin space, I'm most excited about the Lightning Network [0][1] and MimbleWimble [2][3], which are in my view the two most groundbreaking technologies that really push the limits of what blockchains are capable of.

[0] https://en.bitcoin.it/wiki/Lightning_Network

[1] https://lightning.network/

[2] https://download.wpsoftware.net/bitcoin/wizardry/mimblewimbl...

[3] https://bitcoinmagazine.com/articles/mimblewimble-how-a-stri...


To add to that, the escalating hashrate war within Bitcoin between Unlimited and Core is popcorn worthy.

And within the wider space of blockchains, improving access to strong anonymization techniques appears to be moving forward quickly: https://blog.ethereum.org/2017/01/19/update-integrating-zcas...


What's the unlimited thing?


It has to do with the need for increased block sizes. Right now, each block (chunk of validated transactions) can only be 1MB in size. This restricts the total throughput of the network, but keeps the total size of the blockchain down and the growth rate low.

The original expectation was to gradually increase the block size to increase capacity as more users joined the network, eventually transitioning most users to "thin" clients that don't store the (eventually enormous) complete blockchain.

The Core devs right now feel that the current situation (every node a full peer with the complete chain, but maxed out capacity and limited throughput) is preferable for a number of reasons including decentralization, while the Unlimited devs feel that it's time to increase the block size in order to increase capacity and get more users on the network, among other things.

Decisions like this are usually decided by the miner network reaching consensus, with votes counted through hashing power/mined blocks. I'm not sure where things stand at the moment, but it's been interesting to observe.

I understand it's become a rather contentious topic in the community.


Thanks for the great explanation! So where does segwit come into this?


I'm not the best person to ask, and I don't fully understand segwit, but I think it's the Core devs (partial) solution to the problem of scaling up the network without increasing block size.

IIUC, segwit makes certain kinds of complicated transactions easier to handle (ones with lots of inputs/outputs), possibly allowing more transactions to fit in less space, and lays useful groundwork for overlay networks like Lightning. I think the thinking is that overlay networks can be fast, and eventually reconcile against the slower bitcoin network.

Unlimited would rather just scale up the bitcoin network in place, instead of relying on an overlay network.

You'd probably get better information from bitcoincore.org and bitcoinunlimited.info, or the subreddits /r/bitcoin and /r/btc (for core and unlimited, respectively, they split after moderator shenanigans in /r/bitcoin).


New aesthetics in web design.

With the brutalist movement, something new started. People went back to code editors to create websites by hand, skipping the third-party, non-web-native user interface design tools prefilled with common design knowledge that makes websites look uniform.

The idea of design silos and brand-specific design thinking is dropped: no more bootstrap, flat design, material design, etc.

It's like going back to the nineties and reinventing web design. You start from scratch, on your own, and build bottom-up without external influence or help.

It's about creativity vs. the bandwagon, about crafting your own instead of putting together from popular pieces.

http://brutalistwebsites.com


Isn't it just a fad? How usable and readable is the brutalist design? Or what are you trying to maximise other than being different?


A fad? Don't you see a problem with many of today's websites? Both from a developer's and a user's perspective.


Sure, many "modern" sites have terrible UX, but that doesn't mean that minimalist or "brutalist" designs are intrinsically any better.


At least they load quickly. They don't hog your CPU. And they don't mess with Ctrl+F.

All a great boon for UX, while being easier to design.


Be specific. Answer the question above, name the problem you have in mind, and tell us how brutalism solves that problem.


Did not downvote. Let's take twitter as a well known and particularly bad example.

For a brutalist example, let's take HN. It works. Plus, I'm sure the code is quite simple.


From skimming the thumbnails in the linked site, I don't know that I'd call HN or reddit brutalist designs, maybe "classic" or "skeletal".

The emphasis is on dense content with simple links, and there's not a lot of "live" interactive content on the page. I don't find either site to be particularly ugly or visually offensive, contrary to many of the linked "brutalist" sites.

I'd love to see more sites in the HN/reddit model (here's hoping reddit's coming desktop redesign doesn't lose that), but I wouldn't want to actually use more brutalist sites (outside of individual creative expression, anyway).


I didn't coin the term and I don't like it -- but hacker news is actually listed on the linked site.


I think Craigslist is probably a good example of brutalism. But that website you provided doesn't even have a mobile version, which comes across as inexperienced, not "minimalist"


This site is my favorite in this regard: https://ruudvanasseldonk.com/


What's so "brutal" about this website? Looks like perfectly normal blog website to me.


Aerospace Engineer - Enhanced Flight Vision Systems

TLDR: Fancy fused infrared (LWIR/SWIR) and visible spectrum camera systems may 'soon' be on a passenger airliner near you.

Using infrared cameras to see through fog/haze to land aircraft has been happening for a while now, but only on biz jets or on FedEx aircraft with a waiver. The FAA has gained enough confidence in the systems that they have just opened up the rules to allow these camera systems to be used to land on passenger aircraft.

Combine that with the fact that airports are transitioning away from incandescent lights to LEDs (meaning a purely IR sensor system is no longer enough), and you get multi-sensor image fusion work to do and a whole new market to sell them to.
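
For a very rough idea of what "multi-sensor image fusion" means in code (a toy blend, nothing like a certified EFVS pipeline): given co-registered visible and LWIR frames as arrays, you weight toward the IR channel wherever the visible image carries little contrast.

    import numpy as np

    def fuse(visible: np.ndarray, lwir: np.ndarray, ir_weight: float = 0.6) -> np.ndarray:
        """Blend two co-registered grayscale frames scaled to [0, 1]."""
        assert visible.shape == lwir.shape
        # Crude per-pixel contrast proxy for the visible frame.
        contrast = np.abs(visible - visible.mean())
        w = np.clip(ir_weight * (1.0 - contrast), 0.0, 1.0)
        return w * lwir + (1.0 - w) * visible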

Here is a blog post (from a competitor of ours) talking about the new rules.

https://blogs.rockwellcollins.com/2017/01/17/worth-the-wait-...


That sounds very interesting!

Say with a car that has a heads up display for night vision, if it had an SWIR sensor and IR lights, can that cut through fog too? Or is it the LWIR that is able to do that?


SWIR sensors are there for hot-burning lights. LWIR (aka thermal) sees things that are at everyday temperatures. Both wavelengths have better transmittance through fog than visible light, so we say those sensors can 'see through' fog. The physics comes down to the wavelength of the light vs the size of the particles in the medium the light is trying to get through [1].
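
As a rough anchor for the wavelength intuition, in the small-particle limit scattering falls off steeply with wavelength:

    scattering cross-section ∝ 1 / λ^4    (Rayleigh limit, particle size << wavelength)

Fog droplets are comparable in size to infrared wavelengths, so the full Mie treatment applies in practice, but that limiting law is the intuition behind "longer wavelengths get through haze better".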

Another fun part is that fog at one airport can be different from fog at another. So while the weather conditions at both locations may say visibility is "Runway Visual Range (RVR) 1000 ft", that is for a pilot's eyes, and the same camera may work just fine at one location and not at all at the other.

[1] https://upload.wikimedia.org/wikipedia/commons/e/e9/Atmosphe...


The era of gravitational wave astronomy is beginning in earnest with LIGO's new data collection run. It had been offline getting upgraded from 2016/02 to 2016/11 and is now even more sensitive [http://www.ligo.org/news/]


The convergence of 3 big ideas in graph computing:

1. D4M: Dynamic Distributed Dimensional Data Model

http://www.mit.edu/~kepner/D4M/ GraphBLAS: http://graphblas.org

Achieving 100M database inserts per second using Apache Accumulo and D4M https://news.ycombinator.com/item?id=13465141

MIT D4M: Signal Processing on Databases [video] https://www.youtube.com/playlist?list=PLUl4u3cNGP62DPmPLrVyY...

2. Topological / Metric Space Model

Fast and Scalable Analysis of Massive Social Graphs http://www.cs.ucsb.edu/~ravenben/temp/rigel.pdf

Quantum Processes in Graph Computing - Marko Rodriguez [video] https://www.youtube.com/watch?v=qRoAInXxgtc

3. Propagator Model

Revised Report on the Propagator Model https://groups.csail.mit.edu/mac/users/gjs/propagators/

Constraints and Hallucinations: Filling in the Details - Gerry Sussman [video] https://www.youtube.com/watch?v=mwxknB4SgvM

We Really Don't Know How to Compute - Gerry Sussman [video] https://www.youtube.com/watch?v=O3tVctB_VSU

Propagators - Edward Kmett - Boston Haskell [video] https://www.youtube.com/watch?v=DyPzPeOPgUE


So many good links here. Most interested in Dynamic Distributed Dimensional Data Model.

What are you working on?


PUFR http://pufr.io (IoT security startup), and for the last few years I've been doing R&D on the design of a graph computing model that unifies some of the ideas above.


As an Android developer, I'm most excited about instant apps. If it works as marketed, you won't have to hold on to the apps which you use maybe once or twice a week. Instead, you'll be able to download the required feature/activity/view or whatever else on the fly.

I'm not sure I did justice to instant apps, because there's a language barrier playing in. But here's an example: I use the Amazon app maybe once every 2 weeks, and yet it's one of the apps consuming the most memory on my phone due to background services. After Amazon integrates instant apps, I'll be able to delete the app and just google search for the product on my phone. The Google search will then download the required page as an app, giving me the experience of an app while not even having it on the phone.


This is going to sound really naive... Isn't that just a website?


Here's a really good overview: https://www.youtube.com/watch?v=cosqlfqrpFA

Also, to answer your question: no, it's not the same as a website, because it will be a native Android app with the ability to communicate with the Android OS, like any other Android app.

The possibilities for improved UX that you can achieve with instant apps are nearly endless. It all comes down to how you want to use it.


I watched the video and honestly, this is terrible.

If I'm clicking on a link I want to open it with my browser, not with some app. I find this extremely annoying with facebook and even the news carousel already.

I can't open new tabs, copy the url, or switch to other tabs like I would in the normal browser. This is extremely confusing and I don't see how this benefits me in any way.


I couldn't agree more. I'm excited about the idea of streaming apps, but the execution here is terrible. How do you control which url opens which app? If somebody sends you a reddit or hn link, which app does it run? There are dozens out there for both! The whole point of the app is not to have to manage these things, but the only way I can see this working is if you have yet another area in settings to manage which apps open for which links.

A better implementation would have been to have a popup with a list of compatible apps to run, including an option to run it in a browser like any normal link.

I really hope the NFC bit is opt-in by default. I don't want to have to manually disable it every time I get a new phone. In fact, even if I've opted into having the SF Park app run when I'm near a parking meter, I want the option to "reject" it just like I do when I get an incoming phone call.


> How do you control which url opens which app?

The website itself specifies which app should be used by publishing a Digital Asset Links file. (https://developer.android.com/training/app-links/index.html)


I like that even less. If you haven't manually added an app association, it defaults to opening the app specified in the digital assets file without any notification to the user. This is the opposite of a sane default. The first time an app wants to run, it should always let the user decide whether they want to run the app or continue using the browser. Otherwise, this is a recipe for malware.


The malware developers are DEFINITELY looking forward to this capability.


BTW you can do this kind of thing now with classloading. I'm doing development on my phone, and it's far faster to load new class versions than to install a new app version. Google will have a framework around this.

Full OS access could mean permissions per page - could be awkward or ok. Much of the app vs. webpage debate here is the same as always - though the offline advantage is gone.


Never looked into Classloader until now. Very interesting. Can a loaded class also be an extension of a view or an activity?


I'm loading a GLSurfaceView, which extends SurfaceView, which extends View. So, yeah.

There shouldn't be any problem classloading an Activity: use reflection to instantiate it, and treat it as you would a runtime Activity (as opposed to one declared in AndroidManifest.xml). But I haven't actually done this; there could be some gotchas in incorporating a runtime Activity into the GUI.

IIRC google had a few hits on classloading Activities.


ActiveX for Android


It's still a native app, you just don't have to explicitly download it, or download the whole thing. It still runs native code and can take advantage of Android-specific features.


so we're back to applets


I'm curious about the "just a website" bit as well. It's slow going, but new features like service workers, web workers, web sockets, webrtc, seem to be closing the gap between "website" and app.

Is there some point where websites start to significantly displace apps?


> Is there some point where websites start to significantly displace apps?

Yes. It happened about ten years ago.


Ah, yes. Guess I should have said iphone / android native apps, especially ones that depend on network data such that the native app wouldn't be any faster than a web site.

I don't get the appeal, for example, of native apps for things like airlines, amazon, ebay, etc.


Speed, responsiveness and the ability to work offline. These things sort of work for webapps on the desktop because desktops aren't cpu and memory restricted. Phones are. As a result, webapps are just too damned slow and frustrating to use on a phone.


"Is there some point where websites start to significantly displace apps?"

It seems to me that this is already slowly happening and this instant app thing is the reaction. After all, Google would lose control if everybody started to use the browser.


I do electronic music. The rise of platforms like Bandcamp and Patreon, and the abundance of high quality free/inexpensive tools and guides is raising the bar for quality in independent music, and making it easier for more people to get paid in whatever niches they prefer (vs. going for a mass audience).


"The rise of platforms like Bandcamp and Patreon, and the abundance of high quality free/inexpensive tools and guides is raising the bar for quality in independent music"

No easy solution here: what is the "Bandcamp of concert venues"? [0] Is there a venue problem of "Where do you play?"

[0] I know the solution is a political one due to land usage, sound restrictions and venue size.


I would love to get started; I've been looking for a new hobby and this seems perfect. How should I start? :)



You're describing the DAW and plugins, but wouldn't you need a midi keyboard, a groovebox or a drum computer, to actually make those beats?

https://www.thomann.de/gb/roland_tr_8.htm


Nope. I have a MIDI keyboard, but it doesn't work half the time, so I rarely bother with it. You can do everything inside the DAW with the MIDI roll or notation editor.


I'd recommend selecting a DAW and learning it. Really learn it. We have countless great tools in music production available - it mostly comes down to how well you can handle them.

Personally, I love and recommend Ableton Live which features an easy to use interface, workflow, lots of options for experimentation and extensions, great and large community as well. Good choice for beginners and experts alike. Plus, with Ableton Push you have the option to get an excellent hardware controller that is tailored to your DAW, but it isn't something you'd need from the start.

Alternatively, you almost can't beat Logic on price considering its features and performance. I'd say it is more complex, but that's subjective.

Both Logic and Live (Suite version) offer a complete solution, including high quality instruments, synths and effects.

Hardware is optional, but a simple MIDI keyboard for less than 100 bucks will help a lot.


I plan to switch to Ableton once I earn enough from music to pay for it, but that's because it's made for the kind of music I do. Everyone I know uses it, so it's easy to find advice and tutorials.

For now, Reaper and a few free VSTs will do. I find myself bumping up against the fact that Reaper was made for live music and its devs are understandably keeping the focus there, even though they do good work on the MIDI roll. They always nail down a few irritants in each release.

You'll go further and have an easier time if the community around your tools makes the same kind of music. Choosing tools mostly comes down to what you want to do. If you want to do electronic, Ableton is a good bet.


While I'm positive you know that, don't make the mistake of limiting Ableton to electronic music - it has obvious strengths in that department, but there's really no limit to the genre and style of what can be done with DAWs like Live, Logic and Co.

By the way, if Ableton remains too expensive for your taste, there's always Bitwig, which isn't quite as mature and has a much smaller, but growing community, yet it's very similar to Ableton's approach to music production.


I'll be sure to demo both when I get to the point where I can afford a switch.


Safe selective destruction of cells via their internal chemistry, not surface markers, via uptake of lipid-encapsulated programmable suicide gene arrangements.

With the right program and a distinctive chemistry to target in the unwanted cell population, this flexible technology has next to no side-effects, and enables rapid development of therapies such as:

1) senescent cell clearance without resorting to chemotherapeutics, something shown to extend life in mice, reduce age-related inflammation, reverse measures of aging in various tissues, and slow the progression of vascular disease.

2) killing cancer cells without chemotherapeutics or immunotherapies.

3) destroying all mature immune cells without chemotherapeutics, an approach that should cure all common forms of autoimmunity (or it would be surprising to find one where it doesn't), and also could be used to reverse a sizable fraction of age-related immune decline, that part of it caused by malfunctioning and incorrectly specialized immune cells.

And so forth. It turns out that low-impact selective cell destruction has a great many profoundly important uses in medicine.


For 3) does destroying all mature immune cells also get rid of all immunities that the patient has gained throughout life from vaccines, previous illness, etc? Would it make the patient very fragile, not to have gone through gaining those immunities at a young age?


Revaccination, yes, definitely necessary in the idealized case of a complete wipe of immune cells. But that's a small problem in comparison to having a broken immune system. Just get all the vaccinations done following immune repopulation.

Part of the problem in old people is that they have too much memory in the immune system, especially of pervasive herpesviruses like cytomegalovirus. Those memory cells take up immunological space that should for preference be occupied by aggressive cells capable of action.

Another point: in old people, as a treatment for immunosenescence, immune destruction would probably need to be paired with some form of cell therapy to repopulate the immune system. In young people, not needed, but in the old there is a reduced rate of cell creation - loss of stem cell function, thymic involution, etc. That again, isn't a big challenge at this time, and is something that can already be done.

At present sweeping immune destruction is only used for people with fatal autoimmunities like multiple sclerosis because the clearance via chemotherapy isn't something you'd do if you had any better options - it's pretty unpleasant, and produces lasting harm to some degree. Those people who are now five or more years into sustained remission of the disease have functional immunity and are definitely much better off for the procedure, even with its present downsides, given where they were before. If the condition is rheumatoid arthritis, however, it becomes much less of an obvious cost-benefit equation, which is why there needs to be a safe, side-effect free method of cell destruction.


How does the targeting work, and is it only inside the cell or also outside of it?


From one of the companies working with an implementation:

----------

"Our approach is quite different from most other attempts to clear these cells. We have two components to our potential therapy. First, there is a gene sequence consisting of a promoter that is active in the cells we want to kill and a suicide gene that encodes a protein that triggers apoptosis. This gene sequence can be simple, like the one that kills p16-expressing cells, or more complicated, for example, incorporating logic to make it more cell type specific. The second component is a unique liposomal vector that is capable of transporting our gene sequence into virtually any cell in the body. This vector is unique in that it both very efficient, and appears to be very safe even at extremely high doses."

"There's a subtle but profound distinction between our approach and others. The targeting of the cells is done with the gene sequence, not the vector. The liposomal vector doesn't have any preference for the target cells. It delivers the gene sequence to both healthy and targeted cells. We don't target based on surface markers or other external phenotypic features. We like to say "we kill cells based on what they are thinking, not based on surface markers." So if the promoter used in our gene sequence (say, p16) is active in any given cell at the time of treatment, the next part of our gene sequence - the suicide gene - will be transcribed and drive the cell to apoptosis. However, if p16 isn't active in a given cell, then nothing happens, and shortly afterwards the gene sequence we delivered would simply be degraded by the body. This behavior allows our therapy to be highly specific and importantly, transient. Since we don't use a virus to deliver our gene sequence, and our liposomal vector isn't immunogenic, our hope is that we should be able to use it multiple times in the same patient."


How do you propose to infect cells deep inside solid tumors? How do you target cells that have lost cell surface markers? Your example, p16, is used in many cells intermittently, or only in certain microenvironments. How do you deliver? IV, topical, tumor injection (if so, what about needle tract seeding)?


I think web assembly is the piece most likely to change front end development in a meaningful way. A little hard to see now, as the WASM component has no direct access to the DOM, no GC, and no polymorphic inline cache. So, dynamic languages are hard to do with WASM. Once those gaps are closed, however, it should be interesting to see if javascript remains the lingua franca or not.

