HackerNews is very developer-focused. If you guys saw what a radiologist does on a 9-5 basis you'd be amazed it hasn't already been automated. Sitting behind a computer, looking at images and writing a note takes up 90% of a radiologist's time. There are innumerable tools to help radiologists read more images in less time: Dictation software, pre-filled templates, IDE-like editors with hotkeys for navigating reports, etc. There are even programs that automate the order in which images are presented so a radiologist can read high-complexity cases early, and burn through low-complexity ones later on.
What's even more striking is that the field of radiology is standardized, in stark contrast to the EMR world. All images are stored on PACS which communicate using DICOM and HL7. The challenges to full-automation are gaining access to data, training effective models, and, most importantly, driving user adoption. If case volumes continue to rise, radiologists will be more than happy to automate additional steps of their workflow.
Edit: A lot of the pushback from radiologists concerns the feasibility of automated reads, since these have been promised for years with few coming to fruition. I like to point out that the deep learning renaissance in computer vision started in 2012 with AlexNet; this stuff is very new, more effective, and quite different from previous models.
Curious to know what sort of methods you used then if you don't mind sharing.
The best results were with backpropagation neural networks: http://www.sciencedirect.com/science/article/pii/0888613X949...
But we also used fuzzy logic neural networks with genetic algorithms: http://ieeexplore.ieee.org/document/712156/?reload=true
That was 24 years ago.
There might be some interesting things that can be learned from this kind of info and applied to the current status quo (I'm definitely not arguing that there is a sociopolitical element).
Maybe if MRI scans get cheap enough (due to advances in cheap superconductors or whatever) that it's economically feasible to scan people regularly as a precautionary measure (rather than in response to some symptom), then the bulk of the cost might then be in having the radiologist look at the scans. In those "there's nothing wrong but let's check anyways" cases, it might be better to just have the AI do it all even if its accuracy is lower, if it represents a better health-care-dollar-spent to probability-of-detecting-a-serious-problem ratio. (If the alternative is to just not do the scan because the radiologist's fees are too expensive, then it's better to have the cheap scan than nothing at all.)
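To make that ratio concrete, here is a back-of-the-envelope sketch; every number in it (scan cost, read cost, disease prevalence, sensitivity) is an invented placeholder, not a real figure:

```python
# Back-of-the-envelope screening economics: dollars spent per serious
# problem detected, under entirely hypothetical numbers.

def cost_per_detection(scan_cost, read_cost, prevalence, sensitivity):
    """Expected cost per serious problem found when screening an
    asymptomatic population."""
    cost_per_patient = scan_cost + read_cost
    detections_per_patient = prevalence * sensitivity
    return cost_per_patient / detections_per_patient

# Radiologist read: expensive but more sensitive (numbers are made up).
human = cost_per_detection(scan_cost=300, read_cost=100,
                           prevalence=0.005, sensitivity=0.95)
# AI-only read: cheaper but less sensitive (numbers are made up).
ai = cost_per_detection(scan_cost=300, read_cost=5,
                        prevalence=0.005, sensitivity=0.80)

print(round(human), round(ai))  # 84211 76250
```

Under these made-up inputs the cheap, less sensitive AI read costs fewer dollars per problem found; with different inputs the comparison flips, which is exactly the trade-off being described.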
I can see an argument that if the company was sued then it could try to push the blame onto the software vendor, but surely that would be decided based on the contract between company and software vendor, which is usually defined by the software license.
Machine learning is already used in Radiology. Chances are eventually Radiology will be the domain of machines. But it's going to take some time to get there. Healthcare is extremely regulated and closed minded.
Most of the people in the thread you listed above are clearly biased towards medicine and against computer science and machine learning. But machine learning has been having success in diagnostic medicine even well before the deep learning boom that thread talks about.
- Some RETS recorded requests/responses: https://github.com/estately/rets/blob/master/test/vcr_casset...
(Basically something XML-based (SOAP?), with cookies + authorization; it seems very ASP.NET / Windows Server centric)
- DICOM ("It includes a file format definition and a network communications protocol"): https://en.wikipedia.org/wiki/DICOM
It's basically how imaging devices communicate and store images.
Image examples: http://www.osirix-viewer.com/resources/dicom-image-library/
A video of what a doctor would see: https://www.youtube.com/watch?v=Prb5lcR8Jqw
TCP-based protocol in Wireshark: https://wiki.wireshark.org/Protocols/dicom
I wrote a little .dcm to .jpg converter based on ruby-dicom BTW: https://gist.github.com/Dorian/9e3eb5891b49926c15a05c641ffef...
- PACS basically seems just like a database model: http://mehmetsen80.github.io/EasyPACS/
It's the server that is gonna give the info to the doctors.
It's interesting how there seems to be only one popular viewer: OsiriX
The main issue with HL7 is not technical. From a business point of view, making it easy to cooperate with other systems via HL7 gives a department one more reason to adopt a system other than yours.
These are examples of next-generation radiology companies. The current generation of products is focused on image storage and display. These new companies offer automated image analysis before the radiologist even looks at the image. iSchemaView does hemorrhage maps as soon as a new head CT or head MRI is acquired.
It looks like everybody sitting on their data is hindering progress. Is there anything that can be done about that politically? I mean, in many cases the data belongs to the public anyway, unless people signed a waiver, but what is the legality of that?
I'm sure machines will someday take over radiology but there will be many, many jobs automated before it (i.e. decades).
There are three areas that take a lot of time that radiologists would like to see automated:
1. Counting lung nodules.
2. Working mammography CAD.
3. Automated bone-age determination.
Those are the hot three topics for machine learning. Personally, I think that a normal vs. non-normal classifier for CXRs would be more interesting because you could have a completely generated note for normal reads, and radiologists could just quickly look at the image without writing/dictating anything. Of note, hospitals and radiology departments typically lose money on X-ray reads because the reimbursement is $7-$20 (compared to $100+ for MR/CT). So if you could halve the read time, they might become profitable again.
Edit: In terms of 10x, what you'd want is a system that would automatically make the reads (i.e. the radiologist report), and a very efficient way for radiologists to verify what is written. It's hard to make a pathologic read, but since roughly 50% of reads are normal, you could start with normal reports.
And then bringing checklist-driven analysis to radiologists.
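The "auto-generated note for normal reads" idea could be sketched like this; the classifier itself is assumed (not implemented), and the template text and threshold are invented for illustration:

```python
# Sketch of an auto-draft-normals workflow: a (hypothetical) classifier
# score decides whether a pre-written normal report is drafted for quick
# sign-off, or the study is routed to the usual dictation workflow.

NORMAL_TEMPLATE = "Lungs are clear. No acute cardiopulmonary abnormality."

def triage_cxr(normal_score, threshold=0.98):
    """Return (draft_report, needs_dictation).

    normal_score: model-estimated probability that the study is normal;
    the model producing it is assumed, not implemented here.
    """
    if normal_score >= threshold:
        return NORMAL_TEMPLATE, False  # radiologist just verifies and signs
    return "", True                    # full read and dictation as usual

report, needs_dictation = triage_cxr(0.995)
print(needs_dictation)  # False: draft report ready for verification
```

The threshold would have to be set very conservatively in practice, since a missed pathology in an auto-drafted "normal" is exactly the failure mode radiologists worry about.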
So I decided to observe a radiologist at a hospital for a day (back in 2011), and I noticed that most if not all of it was already automated.
Radiologists were there to rubber-stamp the machine's work (and to ensure compliance, laws and what not).
As an example, one survey (https://ashleynolan.co.uk/blog/frontend-tooling-survey-2016-...) put the number of developers who don't use any test tools at almost 50%. In the same survey about 80% of people stated their level of JS knowledge was Intermediate, Advanced or Expert.
We're currently working on a way to help devs test web app functionality and complete user journeys without having to actually write tests in Selenium or whatever. The idea is to let devs write down what they want to test in English ("load the page", "search for a flight", "fill the form", "pay with a credit card", etc), then we'll use NLP to discern intent, and we have ML-trained models to actually execute the test in a browser.
You can give us arbitrary assertions, but we also have built-in tests for the page actually loading, the advertising tags you use, page performance, some security stuff (insecure content, malware links). At the end we hand you back your test results, along with video and perf stats. It’s massively faster than writing Selenium, and our tests won’t break every time an XPATH or ID changes.
>tries to do so by using an imprecise, context-dependent language designed for person-to-person communication to instruct a machine
Selenium is its own can of worms, but it absolutely sounds like you're using the wrong tool for the job here. The problem stopping people from writing browser-based tests is not that people can't understand specific syntaxes or DSLs, it's actually the opposite: people don't have a good, reliable tool to implement browser-based testing in a predictable and specific way that does what a user would intuitively expect.
Whatever the right answers to a next-gen Selenium are, attempting to guess the user's meaning based on Real English by something that is itself an imperfect developing technology like NLP is pretty obviously not the correct toolkit to provide that. Remember, a huge amount of the frustration on Selenium comes from not having the utilities needed to specify your intention once and for all -- the ambiguities of plain English will not help.
If your thing works, it will have to end up as a keyword-based DSL like SQL. SQL is usually not so scary to newcomers because a simple statement is pretty accessible, not having any weird symbols or confusing boilerplate, but SQL has a rigid structure and it's parsed in conventional, non-ambiguous ways. "BrowserTestQL" (BTQL) would need to be similar, like "FILL FORM my_form WITH SAMPLE VALUES FROM visa_card_network;"
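A rigid keyword DSL like this hypothetical "BTQL" is easy to parse unambiguously; here is a minimal sketch for just that one statement form (the grammar and keywords are invented for illustration):

```python
import re

# Minimal parser sketch for one invented "BTQL" statement form.
STMT = re.compile(
    r"FILL FORM (\w+) WITH SAMPLE VALUES FROM (\w+);", re.IGNORECASE)

def parse_fill(stmt):
    """Parse a FILL FORM statement into (form_name, sample_source),
    rejecting anything that doesn't match the rigid grammar."""
    m = STMT.fullmatch(stmt.strip())
    if not m:
        raise ValueError(f"not a valid FILL FORM statement: {stmt!r}")
    return m.group(1), m.group(2)

print(parse_fill("FILL FORM my_form WITH SAMPLE VALUES FROM visa_card_network;"))
# → ('my_form', 'visa_card_network')
```

The point is the contrast with NLP: either the statement matches the grammar and means exactly one thing, or it is rejected outright, with no guessing about intent.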
The biggest piece that's missing in Selenium is probably a new, consistent element hashing selector format; each element on the page should have a machine-generated selector assigned under the covers and that selector should never change for as long as the human is likely to consider it the "same element". The human should then use those identifiers to specify the elements targeted. I don't know how to do that.
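One rough sketch of how such a stable identifier might work: hash only the attributes a human would treat as part of the element's identity, ignoring volatile ones like auto-generated ids and class lists. The attribute choices here are assumptions, not a worked-out format:

```python
import hashlib

# Sketch of a "stable element identifier": derive it from attributes a
# human would consider the element's identity (tag, name, type, label)
# and ignore volatile machine-generated attributes (ids, class lists).
STABLE_ATTRS = ("tag", "name", "type", "label")

def element_id(element):
    """element: dict of attribute name -> value."""
    parts = [f"{k}={element.get(k, '')}" for k in STABLE_ATTRS]
    return hashlib.sha256("|".join(parts).encode()).hexdigest()[:12]

before = element_id({"tag": "input", "name": "email", "type": "text",
                     "label": "Email", "id": "x-42-generated"})
after = element_id({"tag": "input", "name": "email", "type": "text",
                    "label": "Email", "id": "x-97-regenerated",
                    "class": "form-control shiny"})
print(before == after)  # True: identifier survives cosmetic changes
```

The hard part the comment alludes to is not the hashing, of course, but deciding which attributes constitute "identity" for an arbitrary page, and handling elements whose identifying attributes themselves change.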
The second biggest piece that's missing from Selenium is a consistent, stable WebDriver platform that almost never errors out mid-script; this may involve some type of compile-time checking against the page's structure or something (which I know is hard/possibly impossible because of JS and everything else).
And whether or not it gets that data is a unit test in another place.
Testability isn't the domain of the view layer.
Abstracting the DOM into a declarative DOM is great for performance, but doesn't lead to necessarily more testable code.
- Test recorders that aren't a great experience and output incomprehensible, brittle tests.
- Test composers that I can best describe as 90's SQL query builders for Selenium.
Complex JS apps are still a challenge for us (especially with some of the WTF code we come across in the wild), but we have a strategy in the works for them. We're still pre-release though. If you're interested, send me an email (email@example.com) and I'll add you to our alpha list.
That is very often the case. It needs to change. Testing is a part of software development, and anyone who writes software should be aware of it. I feel the same way about documentation. And requirements. You can't write good software without knowledge of the processes that surround development. It isn't enough just to be able to write great code.
If you spend more time writing/running tests than you would fixing the bugs they find, you may be doing it wrong. If you're writing documentation no one will read, you may be doing it wrong.
They clearly do have a place though. As for maintaining a set of requirements... I appreciate there must be some environments where what is required is well understood and relatively stable. I'm not quite sure if I should look forward to working in such a place or not!
Actually there isn't. Every project, no matter how it's managed, changes as it goes on. It has to, because you learn and discover things along the way. That's why maintaining and understanding project requirements and how they've changed is incredibly important. If you don't keep on top of them then you end up with a project that wanders all over the place and never finishes. Or you build something that misses out important features. Or the project costs far too much. Requirements are not tasks, or epics, or things you're working on right now. They're the goals that the tasks and epics work towards.
(My first startup was a requirements management app.)
How did that work out? In the 90's it seemed every industry was switching to Documentum for that sort of thing.
Why should those 2 activities be compared? They do not compare: writing/running tests is about discovering the bug, not fixing it. You still need to fix it after you have done your testing activity.
The time spent writing/running tests should instead be compared to the time spent on bug discovery without tests, i.e. how much you value the fact that your users are going to encounter bugs, what the consequences of the users hitting bugs are, what the process to report them is, etc.
I don't think that's the definition of a junior developer. Test tools are a part of building software; you should be hiring devs that have created projects that use tests of some sort, if not with the technology you're using.
>I expect that junior developer in software field should be able to program only.
I don't know how you can have little to no dev experience and know how to program.
Developers need to know the development cycle, automated testing, continuous integration, the software life cycle, ticketing systems, source control systems, branching and merging, cooperating, etc.
Programmers need to know programming languages, patterns, algorithms, computer internals, effectiveness, profiling, debugging, etc.
A junior developer (in the software field) has little or no experience in development, so a junior developer is almost equal to a programmer, which causes a lot of confusion.
If by 'testing' you really mean 'unit testing', as I suspect most junior engineers who claim testing experience do, then hope is already lost. The one saving grace is that there is enough churn in webdev that nothing lasts long enough to reveal how fragile it is.
That said, I'll be thrilled if React Native gives rise to higher quality apps in situations where a native app is unavoidable (e.g. my bank's app).
Five years ago native apps made a level of UX possible that was unheard of on the web, to say nothing of mobile. But today not only has HTML/js closed the gap, but whiz-bang native animations aren't impressive just on account of being novel anymore.
I think these are the future. Once they catch on with mainstream consumers, native apps won't stand a chance against the convenience of simply visiting a website to install/use. Plus, on the developer end, we finally have a true "write once, run anywhere" situation that doesn't involve any complex toolchains or hacky wrappers.
But do you think that Apple would embrace this technology, given that e.g. their app-store is generating lots of revenue?
99% of web apps need the same features, but most of this still comes down to manually rolling your own.
I should be able to clone some repo, enter some DO/AWS/GOOG keys and push.
This makes the complexity problem much easier to solve, as the code is (should be) less likely to cause an unanticipated mutated state which can't be easily tested for.
I suspect the reality is a small subset of programmers think functional programming is amazing and everyone else hates it. You might think it reduces complexity, but a lot of people feel it reduces comprehensibility.
Yep. It's not that I hate it, I just don't like it. The thing is that the functional-praisers are much more vocal about how they love it, whereas people who write imperative code do not care much about Haskell.
We are happy with LINQ and that's all 99% of us want/need.
Because nothing outside of the function can be changed, and dependencies are always provided as function arguments, the resulting code is extremely predictable and easy to test, and in some cases your program can be mathematically proven correct (albeit with a lot of extra work). Dependency injection, mocks, etc are trivial to implement since they are passed directly to the function, instead of requiring long and convoluted "helper" classes to change the environment to test a function with many side effects and global dependencies. This can lead to functions with an excessively long list of parameters, but it's still a net win in my opinion (this can also be mitigated by Currying).
A side-effect (hah) of this ruleset is that your code will tend to have many small, simple, and easy to test methods with a single responsibility; contrast this with long and monolithic methods with many responsibilities, lots of unpredictable side effects that change the behavior of the function depending on the state of the program in its entirety, and which span dozens or hundreds of lines. Which would you rather debug and write tests for? Tangentially, this is why I hate Wordpress; the entire codebase is structured around the use of side-effects and global variables that are impossible to predict ahead of runtime.
There is much, much more to functional programming (see Monads and Combinators), but if you don't take away anything else, at least try to enforce the no-side-effects rule. A function without side-effects is deterministic; i.e. it will always give you the same output for any given set of inputs (idempotence comes for free). Because everything is a function, functions are first-class citizens, and there are only a few simple data structures, it becomes easy to chain transformations and combine them by applying some of the arguments ahead of time. Generally you will end up with many generalized functions which can be composed to do anything you require without writing a new function for a specific task, thus keeping your codebase small and efficient. It's possible to write ugly functional code, and it's possible to write beautiful and efficient object-oriented code, but the stricter rules of functional style theoretically make the codebase less likely to devolve into incomprehensible spaghetti.
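A minimal Python illustration of these ideas: side-effect-free functions whose output depends only on their inputs, partial application standing in for currying, and composition of small functions into a pipeline:

```python
from functools import partial, reduce

# Side-effect-free: the result depends only on the arguments, so the
# function is deterministic and trivially testable with no mocks.
def apply_discount(rate, price):
    return price * (1 - rate)

# Partial application ("currying" via functools.partial): fix the rate
# once, producing a new single-argument function.
member_discount = partial(apply_discount, 0.10)

# Compose small functions into a pipeline instead of writing a new
# monolithic function for each task.
def compose(*fns):
    return lambda x: reduce(lambda acc, f: f(acc), fns, x)

checkout = compose(member_discount,            # apply 10% discount...
                   lambda p: round(p + 0.50, 2))  # ...then add a fixed fee

print(checkout(100.0))  # 90.5
```

Note how `checkout` was assembled entirely from existing pieces; that is the "generalized functions composed to do anything you require" point in practice.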
Huge and seemingly often unacknowledged issue these days. And many attempted solutions seem to be adding fuel to the fire (or salt to the wound) by creating more tools (to fix problems with previous tools) ...
Red (red-lang.org) is one different sort of attempt at tackling modern software development complexity. It's like an improved version of REBOL, but aims to be a lot more - like a single language (two actually, Red/System and Red itself) for writing (close to) the full stack of development work, from low level to high level. Early days though, and they might not have anything much for web dev yet (though REBOL had/has plenty of Internet protocol and handling support).
You're asking the wrong question. It shouldn't be "how do we get people to slow down?" It should be, "how do we make rapid software development better?"
Not too long ago (in human-years, not internet-years), most node packages weren't built with unit testing. Now it's quite common in the popular packages.
Website UI is probably the same thing. After all, it took us a really long time till we got the whole HTML5 spec finally stabilised.
So you will probably see the tipping point occur over the next 10 human years, or less.
And just like you, I've been really frustrated with the inadequacy of UI testing tools, especially Selenium. So like @donaltroddyn, I set out to develop my own UI testing tool (https://uilicious.com/), to simplify the test flow and experience.
So wait around, you will see new tools, and watch them learn from one another. And if you want to dive right into it, we are currently running a closed beta.
This also brings me to Traefik, one of the coolest projects I have come across in the last months.
Traefik + DC/OS + CI/CD is what allows developers to create value for the business in hours and not in days or weeks.
Also, we deploy to production at least 4 times a day, the time from commit to deployable to production is about 30 minutes. And because it is a container it will start with a clean, documented setup (Dockerfile) every time. There is no possibility of manual additions, fixes or handholding.
AMI only runs on AWS. Docker runs on anything. I don't think "versatile" is the word you are looking for.
We mainly use DC/OS to run more services on fewer instances.
From an "I just want to get my app deployed" perspective it may still be best to just use Heroku. But from a "new developments in the field" perspective, the fact that I can rent a few machines and have my own Heroku microcosm for small declining effort is pretty cool.
Transfer Learning (so we need less data to build models) http://ftp.cs.wisc.edu/machine-learning/shavlik-group/torrey...
Generative adversarial networks (so computers can get human like abilities at generating content) https://papers.nips.cc/paper/5423-generative-adversarial-net...
Furthermore, if we consider that most of these DL papers completely ignore the fact that the nets must run for days on a GPU to get decent results, then everything appears way less impressive. But that's just my opinion.
I love working in deep learning, but we still have LOTS of work to do.
The time from theoretical paper to widely deployed app is smaller in DL than in any other field I have experience with.
It's going to take less and less time and money to train a useful model.
Does this only apply to artistic content, or also to engineering content? Say PCB layouts, architectural plans, mechanical designs, etc.?
To get a better understanding (other than reading a paper), read this excellent blog post: it is a lot harder to build a NN when there are very constrained rules. But it is also a lot easier to verify and penalize it and generate synthetic data.
The most developed implementation is BayesDB, but there are a lot of ideas coming out of a number of places right now.
e.g. store customer orders in the DB, and query `P(c buys x | c bought y)` in order to make recommendations - where `c buys x` is unknown, but `c bought y` occurred, and we know the x and y purchases of other customers.
Is that sort of how it works?
The way I see it, the real utility comes from the fact that domain models such as those in a company's data warehouse are typically very complex, and a great deal of care often goes into mapping out that complexity via relational modelling. It's not just that c bought x and y, but also that c has an age, and a gender, and last bought z 50 days ago, and lives in Denver, and so on.
Having easy access to the probability distributions associated with those relational models gives you a lot of leverage to solve real life problems with.
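As a toy illustration of the conditional query, here is plain counting over an order table; a probabilistic DB like BayesDB fits a generative model rather than counting rows, so this only sketches the interface idea, not the underlying machinery:

```python
from collections import defaultdict

# Estimate P(c buys x | c bought y) by counting over (customer, product)
# pairs; the data is a toy example.
orders = [
    ("alice", "y"), ("alice", "x"),
    ("bob",   "y"),
    ("carol", "y"), ("carol", "x"),
    ("dave",  "z"),
]

def p_buys_given_bought(x, y, orders):
    bought = defaultdict(set)
    for customer, product in orders:
        bought[customer].add(product)
    with_y = [c for c, products in bought.items() if y in products]
    if not with_y:
        return 0.0
    return sum(1 for c in with_y if x in bought[c]) / len(with_y)

print(p_buys_given_bought("x", "y", orders))  # 2 of the 3 y-buyers also bought x
```

The relational-modelling point above is exactly where counting breaks down: once you condition on age, gender, location, and recency as well, almost no rows match, which is why the model-based approach matters.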
http://empirical.com (still dark atm)
co-founded by CEO Richard Tibbetts, who was also a co-founder of StreamBase (acquired by TIBCO).
I'm curious to know if it's related to the current discussion.
- Meta-tracing, e.g. PyPy.
- End-to-end verification of compilers, e.g. CompCert and CakeML.
- Mainstreamisation of the ideas of ML-like languages, e.g. Scala, Rust, Haskell, and the effect these ideas have on legacy languages, e.g. C++, Java 9, C#.
- Beginning of use of resource types outside pure research, e.g. affine types in Rust and experimental use of session types.
Foundation of mathematics:
- Homotopy type theory.
- Increasing mainstreamisation of interactive theorem provers, e.g. Isabelle/HOL, Coq, Agda.
- Increasing ability to have program logics for most programming language constructs.
- Increasingly usable automatic theorem provers (SAT and SMT solvers) that just about everything in automated program verification 'compiles' down to.
I don't know much about CPUs, but I suspect that one of the core problems of software verification, the absence of a useful specification, isn't much of an issue with hardware.
I'd be really interested in applying any of these techniques to a full TLS implementation.
 K. Bhargavan et al, Implementing TLS with Verified Cryptographic Security. http://research.microsoft.com/en-us/um/people/fournet/papers...
Can you talk more about this? I even got THE book on this (haven't really read it yet though) and like I think I get the rough ideas but I'd be curious to hear what HoTT means to you (lol).
In HoTT, there is an extension of inductive types that allows you to, not just have constructors, but also to impose "equalities" so these generalized "quotients" really have first-class status in the language.
As far as "exciting developments" in HoTT, the big one right now is Cubical Type Theory, which is the first implementation of the new ideas of HoTT where higher inductive types and the univalence axiom "compute", which means that the proof assistant can do more work for you when you use those features.
I just saw a talk about it and from talking to people about it, this means that it won't be too long (< 5 years I predict) before we have this stuff implemented in Agda and/or Coq.
Finally, I just want to say to people that are scared off or annoyed by all of the abstract talk about "homotopies" and "cubes", you have to understand that this is very new research and we don't yet know the best ways to use and explain these concepts. I certainly think that people will be able to use this stuff without having to learn anything about homotopy theory, though the intuition will probably help.
HoTT brought dependent types and interactive theorem proving to the masses. Before HoTT, the number of researchers working seriously on dependent type theory was probably < 20. This has now changed, and the field is developing at a much more rapid pace than before.
How much do you know about modern testing, abstract interpretation, SAT/SMT solving? In any case, as of Feb 2017, a lot of this technology is not yet economical for non-safety critical mainstream programming. Peter O'Hearn's talk at the Turing Institute https://www.youtube.com/watch?v=lcVx3g3SmmY might be of interest.
Why isn't it economical yet?
There are some ways in which these tools are not economical. There is currently a big gap. On one side of the gap, you have SMT solvers, which have encoded in them decades of institutional knowledge about generating solutions to formulas. An SMT solver is filled with tons of "strategies" and "tactics", which are also known as "heuristics" and "hacks" to everyone else. It applies those heuristics, and a few core algorithms, to a formula to automatically come up with a solution. This means that the behavior is somewhat unpredictable: sometimes you can have a formula with 60,000 free variables solved in under half a second, sometimes you can have a formula with 10 that takes an hour.
It sucks when that's in your type system, because then your compilation speeds become variable. Additionally, it's difficult to debug why compiling something would be slow (and by slow, I mean sometimes it will "time out" because otherwise it would run forever) because you have to trace through your programming language's variables into the solvers variables. If a solver can say "no, this definitely isn't safe" most tools are smart enough to pull the reasoning for "definitively not safe" out into a path through the program that the programmer can study.
On the other end of the spectrum are tools like coq and why3. They do very little automatically and require you, the programmer, to specify in painstaking detail why your program is okay. For an example of what I mean by "painstaking" the theorem prover coq could say to you "okay, I know that x = y, and that x and y are natural numbers, but what I don't know is if y = x." You have to tell coq what axiom, from already established axioms, will show that x = y implies y = x.
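For a flavor of what that obligation looks like, here is the same x = y implies y = x goal in Lean (a sketch for illustration; the comment describes Coq, where the `symmetry` tactic or `eq_sym` lemma plays the same role):

```lean
-- The obligation described above: knowing x = y, show y = x.
-- Even for something this obvious, you must name the symmetry
-- principle yourself; the prover will not guess it for you.
example (x y : Nat) (h : x = y) : y = x :=
  h.symm
```

Each such step is trivial in isolation; the "painstaking" part is that verifying a real program requires thousands of them.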
Surely there's room for some compromise, right? Well, this is an active area of research. I am working on projects that try to strike a balance between these two design points, as are many others, but unlike the GP I don't think there's anything to be that excited about yet.
There's a lot of problems with existing work and applying it to the real world. Tools that reason about C programs in coq have a very mature set of libraries/theorems to reason about memory and integer arithmetic but the libraries they use to turn C code into coq data structures can't deal with C code constructs like "switch." Tools that do verification at the SMT level are frequently totally new languages, with limited/no interoperability with existing libraries, and selling those in the real world is hard.
It's unlikely that any of this will change in the near term because the community of people that care enough about their software reliability is very small and modestly funded. Additionally, making usable developer tools is an orthogonal skill from doing original research, and as a researcher, I both am not rewarded for making usable developer tools, and think that making usable developer tools is much, much harder than doing original research.
It sadly also depends a lot on the solver used and the way the problem was encoded in SMT. For a class in college I once tried to solve Fillomino puzzles using SMT. I programmed two solutions: one used a SAT encoding of Warshall's algorithm and the other constructed spanning trees. On some puzzles the first solver required multiple hours whereas the second only needed a few seconds, while on other puzzles it was the complete opposite. My second encoding needed hours for a puzzle which I could solve by hand in literally a few seconds. SAT and SMT solvers are extremely cool, but incredibly unpredictable.
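A brute-force CNF SAT check makes the worst-case blow-up behind that unpredictability explicit; real solvers avoid the exhaustive search only sometimes, via heuristics sensitive to the encoding. The literal convention here (positive/negative integers) follows the standard DIMACS style:

```python
from itertools import product

def brute_force_sat(num_vars, clauses):
    """Exhaustive CNF satisfiability check.

    clauses: list of clauses, each a list of ints; literal k means
    variable k is true, -k means variable k is false (DIMACS-style).
    Tries all 2^num_vars assignments, hence exponential in the worst case.
    """
    for bits in product([False, True], repeat=num_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 or x2) and (not x1 or x2) and (not x2 or x3): satisfiable
print(brute_force_sat(3, [[1, 2], [-1, 2], [-2, 3]]))  # True
# x1 and not x1: unsatisfiable
print(brute_force_sat(1, [[1], [-1]]))  # False
```

Two logically equivalent encodings of the same puzzle can produce wildly different clause structures, and it's the structure (not the puzzle) that the solver's heuristics react to; hence the hours-versus-seconds swings described above.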
It's frustrating because this stuff really works. Making it work probably doesn't have to be hard, but researchers that know both about verification and usability basically don't exist. I blame the CS community's disdain for HCI as a field.
I had heard about Dafny but hadn't seen the tutorial!
> Additionally, making usable developer tools is an orthogonal skill from doing original research, and as a researcher, I both am not rewarded for making usable developer tools, and think that making usable developer tools is much, much harder than doing original research.
When you're saying they're orthogonal, are you effectively saying that researchers generally don't have 'strong programming skills' (as far as actually whacking out code). If so, how feasible would it be for someone who is not a researcher, but a good general software engineer, to work on the developer tools side of things?
I think that this keeps most researchers away from making usable tools. It's hard, they're not rewarded for making software artifacts, they're maybe not as good at it as they are at other things.
I think it's feasible for anyone to work on the developer tools side of things, but I think it's going to be really hard for whoever does it, whatever their background is. There are lots of developer tool development projects that encounter limited traction in the real world because the developers do what make sense for them, and it turns out only 20 other people in the world think like them. The more successful projects I've heard about have a tight coupling between language/tool developers, and active language users. The tool developers come up with ideas, bounce them off the active developers, who then try to use the tools, and give feedback.
This puts the prospective "verification tools developer" in a tight spot, because there are only a few places in the world where that is happening nowadays: Airbus/INRIA, Rockwell Collins, Microsoft Research, NICTA/GD. So if you can get a job in the tools group at one of those places, it seems very feasible! Otherwise, you need to find some community or company that is trying to use verification tools to do something real, and work with them to make their tools better.
Compilers, in particular optimising compilers, are notoriously buggy; see John Regehr's blog. An old dream was to verify them. The great Robin Milner, who pioneered formal verification (like so much else), said in his 1972 paper "Proving compiler correctness in a mechanized logic" about all the proofs they left out: "More than half of the complete proof has been machine checked, and we anticipate no difficulty with the remainder". It took a while before X. Leroy filled in the gaps. I thought it would take a lot longer before we would get something as comprehensive as CakeML; indeed, I had predicted this would only appear around 2025.
It sucks when that's in your type system
Making usable developer tools is much, much harder than doing original research.
In the best case, deciding formulae like A -> B is NP-complete, but typically much worse. Moreover, program verification of non-trivial programs seems to trigger the hard cases of those NP-complete (or worse) problems naturally. Add to that the large size of the involved formulae (due to large programs), and you have a major theoretical problem at hand: e.g. solve SAT in n^4, or find a really fast approximation algorithm. That's unlikely to happen any time soon.
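To make the difficulty concrete, the naive decision procedure for SAT tries all 2^n assignments. Here is a minimal Python sketch (literals follow the DIMACS convention; this is illustrative, not how real solvers work):

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Check satisfiability of a CNF formula by trying all 2^n assignments.

    `clauses` is a list of clauses; each clause is a list of non-zero
    integers, where k means variable k and -k means its negation.
    """
    for assignment in product([False, True], repeat=n_vars):
        # A clause is satisfied if any one of its literals is true.
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None  # unsatisfiable

# (x1 OR x2) AND (NOT x1 OR x2) is satisfied by x2 = True
print(brute_force_sat([[1, 2], [-1, 2]], 2))  # → (False, True)
```

Modern CDCL solvers prune this search aggressively, but the worst case remains exponential, which is exactly the wall large verification formulae run into.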
We don't even know how to effectively parallelise SAT, or how to make SAT fast on GPUs. Which is embarrassing, given how much of deep learning's recent successes boil down to gigantic parallel computation at Google scale. Showing that SAT is intrinsically not parallelisable, or even just not GPUable (should either be true), looks like a difficult theoretical problem.
As a researcher, I am not rewarded for building tools, and the community of people that care enough about their software reliability is very small and modestly funded.
You can think of program correctness like the speed of light: you can get arbitrarily close, but the closer you get, the more energy (cost) you have to expend. Type-checking and a good test suite already catch most of the low-hanging fruit that the likes of Facebook and Twitter need to worry about. As of 2017, for all but the most safety-critical programs, the cost of dealing with the remaining problems is disproportionate in comparison with the benefits. Instagram, WhatsApp, and Gmail are good enough already despite not being formally verified.
Cost/benefit will change only if the cost of formal verification is brought down, or the legal frameworks (liability laws) are changed so that software producers have to pay for faulty software (even when it's not an Airbus A350 autopilot).
I know that some verification companies are lobbying governments for such legislative changes. Whether that's a good thing, regulatory capture, or something in between is an interesting question.
Another dimension of the problem is developer education. Most (>99%) of contemporary programmers lack the necessary background in logic even to think properly about program correctness. Just ask the average programmer about loop invariants and termination orders: they won't be able to produce them even for 3-line programs like GCD. This is not a surprise, as there is no industry demand for this kind of knowledge, and it will probably change only with a change in demand.
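For the curious, here is what a loop invariant and termination argument look like for GCD, written as runtime assertions in a Python sketch rather than a formal proof:

```python
import math

def gcd(a, b):
    """Euclid's algorithm, annotated with the facts a verifier would need."""
    assert a >= 0 and b >= 0 and (a, b) != (0, 0)  # precondition
    g = math.gcd(a, b)  # "ghost" value, used only to state the invariant
    while b != 0:
        # Loop invariant: the gcd of (a, b) is preserved by every iteration.
        assert math.gcd(a, b) == g
        # Termination measure: b strictly decreases and stays non-negative,
        # since 0 <= a % b < b.
        a, b = b, a % b
    return a

print(gcd(12, 18))  # → 6
```

A verification tool would ask for exactly these three things (precondition, invariant, decreasing measure) and then check them symbolically instead of at runtime.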
I do think that making verification tools easier is something that researchers could and should be thinking about. Probably not verification and logic researchers directly, but someone should be carefully thinking about it and systematically exploring how we can look at our programs and decide they do what we want them to do. I have some hope for the DeepSpec project to at least start us down that path.
I also have hope for type-level approaches where the typechecking algorithms are predictable enough to avoid the "Z3 in my type system" problem but expressive enough that you can get useful properties out of them. I think this is also a careful design space and another place that researchers lose because they don't think about usability. They just say "oh, well I'll generate these complicated SMT statements and discharge them with Z3, all it needs to work on are the programs for my paper and if it doesn't work on one of them, I'll find one it does work on and swap it out for that one." Why would you make a less expressive system if usability wasn't one of your goals?
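As a toy illustration of the predictable, type-level end of that spectrum, here is a Python sketch (all names invented) where a checker like mypy rejects unit-mixing bugs, with no SMT solver in the loop, so checking stays fast and predictable:

```python
from typing import NewType

# Distinct "branded" number types; at runtime they are plain floats,
# but a static checker refuses to pass one where the other is expected.
Meters = NewType("Meters", float)
Feet = NewType("Feet", float)

def altitude_ok(alt: Meters) -> bool:
    # Hypothetical safety threshold, purely for illustration.
    return alt > Meters(300.0)

def feet_to_meters(alt: Feet) -> Meters:
    return Meters(alt * 0.3048)

# altitude_ok(Feet(1000.0)) would be flagged by the type checker;
# the explicit conversion is the only way through.
print(altitude_ok(feet_to_meters(Feet(1000.0))))  # → True (1000 ft ≈ 304.8 m)
```

The property expressible this way is far weaker than full functional correctness, but the check always terminates quickly, which is the usability trade-off the parent is pointing at.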
Don't know much about it, but verum.com claims 50% reduction in development costs.
There's been a renaissance of study in placebo effects, meditation, and general frameworks for how people change belief for therapeutic purposes or otherwise, but to me, that's been going on for a long time and is more about acceptance than being a new development.
One of the most exciting developments that's been coming out recently is playing with language to do what's called context-free conversational change.
Essentially, you can help someone solve an issue without actually knowing the details or even generally what they need help with. It's like homomorphic encryption for therapy. A therapist can do work, a client can report results, but the problem itself can be a black box along with a bit of the solution as well since much of the change is unconscious.
It works better with feedback (a conversation) of course, but often can be utilized in a more canned manner if you know the type of problem well enough.
I'm working on putting together an automated solution that's based on some loose grammar rules, NLP, Markov chains, and anything else I can use to help a machine be creative in language to help people solve their own problems, but as a first step as a useful tool for beginner therapists to help them get used to the ideas and frameworks with language to use.
So essentially, I'm getting a good chunk of the way toward hacking on a machine that can reliably work on people's problems without having to train a full AI or anything remotely resembling real intelligence, just mimicking it.
Before you go thinking, "Didn't they do that with Eliza?" Well yes, in a way, but my implementation is using an entirely different approach.
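For readers unfamiliar with one of the ingredients mentioned above, a word-level Markov chain text generator can be sketched in a few lines of Python; this is purely illustrative and not the system being described:

```python
import random
from collections import defaultdict

def train(text, order=1):
    """Build a word-level Markov model: state -> list of possible next words."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        state = tuple(words[i:i + order])
        model[state].append(words[i + order])
    return model

def generate(model, length=10, seed=None):
    """Walk the chain from a random starting state."""
    rng = random.Random(seed)
    state = rng.choice(list(model))
    out = list(state)
    for _ in range(length):
        nxt = model.get(tuple(out[-len(state):]))
        if not nxt:
            break  # dead end: no observed successor for this state
        out.append(rng.choice(nxt))
    return " ".join(out)

# Tiny example corpus (made up, echoing the thread's question).
corpus = "who would you not be if you were not you"
model = train(corpus)
print(generate(model, length=8, seed=0))
```

With a larger corpus and a higher `order`, output becomes more fluent but less novel; actual creativity in language, as the parent notes, needs much more than this.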
With all due respect, and said politely: it is my opinion that you are a charlatan.
I wasn't interested in long citations or garnering proof of my work in particular with training a machine to do this work. I simply wished to add to this thread and did so, in order to show someone out there, maybe even you, what else is going on that is exciting in my little corner of the world.
I'm not that good of a programmer, so it's not in a state that it does work yet. I hope my original comment didn't suggest otherwise, but let me be perfectly clear here: I have no working machine implementation that can do what I want yet. It can work with simple canned responses like Eliza, but it's not enough. I am working on employing all of the techniques and tools mentioned, but progress is slow.
However, this is work and change I employ daily with my clients professionally and I can assure you that it does work.
You don't even have to take my word for it.
Consider....seriously consider: who would you not be if you weren't you?
If you thought about that one for a sec and felt a little spaced out for a second, you did very well.
If you came up with something quickly like "me" and didn't really actually consider the question, allow me to pose another to you. Again, seriously consider this. Read it a few times. Imagine emphasis on different words each time.
Who are you not without that problem you are interested in solving?
This work can be made more difficult by text only and seriously asynchronous communication, which is why I mentioned it being easier within conversation.
If you are interested in more, google "mind bending language" or "attention shifting coaching" and find Igor Ledochowski and John Overdurf. Their work has helped me change the lives of thousands.
> You don't even have to take my word for it.
Honest question: how not?
> who would you not be if you weren't you?
Depending on how you parse the sentence, either "someone else" or "that's just a paradox". Essentially the concept of "me" as an entity is fundamentally flawed.
Playing with the meanings of "me" and "not me" in a subjunctive form doesn't make the question very interesting (as in non-trite), to be honest. I guess the intent is not to be fresh but to be thought-provoking or similar, or setting the listener in a certain mindset? Still, sets my mind in the "meh" state.
> Who are you not without that problem you are interested in solving?
I'm not my problems. I'm also not not-my-problems. Actually I am not (I isn't?). I don't see how this helps with anything, though.
Either way, your questions pose (to me) more philosophical thinking (which I already do, anyways) than mindbending or whatever. Maybe my mind is already bent... and I have to say it didn't go very well ;)
A long time ago I came to the conclusion that these questions are merely shortcomings in how language and cognition works. Metaphysics, ontology (and even epistemology) are just fun puzzles with no solution, which I'm ultimately obliged to answer with "who the f--- cares".
Kant was right.
Not that anything you said is directly contradicted by Kant. In fact I'd say it fits very well within the idea that "human mind creates the structure of human experience". It's just never been really useful to me in any way. I really, really, want to know more of (and even believe in) your changework but, often being presented with vague ideas, no one has ever made a solid case on how it isn't, as GP said, charlatanry.
This is true of basically every post in this thread. If you're interested in learning more, there are more friendly ways of asking.
Re: friendliness -- I believe I expressed the opinion that someone is a charlatan in as friendly of a manner as is possible.
Igor Ledochowski - http://hypnosistrainingacademy.com
John Overdurf - http://JohnOverdurf.com
As far as context-free therapy goes, that's a bit of an advanced subject, but can be learned and mastered through some of their programs.
The key tenets are simple though. As a model, consider that human language builds around 5 concepts: Space, Time, Energy, Matter, and Identity. These 5 also map cleanly to questions (5Ws and H) and language predicates in human language. Space is Where, Time is When, Energy is How, Matter is What, and Identity carries two with Why and Who.
Every problem you've ever had is built up of some combination of the 5 in a specific way, unique to you.
The pattern of all change is this:
1) Associate to a problem, or in other words, bring it to mind.
2) Dissociate from the problem, or basically get enough distance from it so that you can think rationally and calmly. Similar to a monkey not reaching for a banana when a tiger is running after it, your brain does not do change under danger and stress well. It can, but that usually leads to problems in the first place.
3) Associate (think about, experience) a resource state. Another thought or experience that will help with this one, for example if someone were afraid of clowns, I'd ask a question like, "What clowns fear you?" It usually knocks them out of the fear loop for a second.
4) While thinking about the resource, recall the problem and see how it has changed. Notice I said has changed. It always changes. You can never do your problem the same again. Will this solve things on the first go? Maybe. Maybe not, but it's enough to get a foothold and a new direction and loop until it's done.
Which is what makes this fun and exciting to do in person, and fun and exciting to help teach a machine to mimic, too.
That's why I made my original comment. Maybe you're not a charlatan, in which case I'd have to conclude you're thinking irrationally and have been deceived by some form of magical thinking.
You have not proposed any mechanism by which these steps can form a consistent treatment for problems that individuals have struggled with for years. You've merely declared that it will, and a whole lot of faith is required.
Other posts in this thread mostly propose a mechanism, even if we readers don't have the prerequisites to fully understand it. For example, consider the proposal that machine learning could be applied to the mundane tasks a radiologist performs. It may or may not pan out, but it has a basis.
Basically, what we do is based on how we see things. If we can change how we see things, then new actions & results become available.
Then the question just becomes, how can we change how we see things.
If how we see something comes from what we've experienced, then introducing a new experience can have us see it differently.
If how we see something comes from what we think about it, then we can introduce a new thought about it.
The point being to change the internal mental model related to the thing, so that we see it differently, we experience it differently, it occurs for us differently than it did before.
In the case above, step 3 introduced a new thought and internal experience related to the thing, and thus the step between 3 and 4 is, "their internal mental model, connected to the thing, changed".
Again, the mechanism (and the missing step) becomes, "change how we see & experience something, change our internal model relating to it". And then, some possibilities for triggering that include having a new thought about it, having a new experience about it; and various techniques can exist for introducing those experiences or thoughts.
At least, that's how I see it (how it occurs for me, how I've experienced it).
Lambdas are lightweight functions that can be spawned on demand in sub-millisecond time and don't need a server that's constantly running. They can replace most server code in many settings, e.g. when building REST APIs that are backed by cloud services such as Amazon DynamoDB.
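A minimal sketch of the shape such a function takes, assuming Python and an API-Gateway-style event. The data store here is an in-memory stand-in for DynamoDB, and all names are illustrative:

```python
import json

# Stand-in data store; in a real deployment this would be a DynamoDB table
# accessed through boto3 (table name and schema invented for illustration).
PRODUCTS = {"42": {"id": "42", "name": "widget", "price": "9.99"}}

def handler(event, context):
    """The shape AWS Lambda expects for a Python function: (event, context).

    API Gateway delivers the HTTP request as `event`; the returned dict
    becomes the HTTP response. No server process sits waiting in between.
    """
    product_id = event.get("pathParameters", {}).get("id")
    item = PRODUCTS.get(product_id)
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(item)}

# Simulating an API Gateway invocation locally:
resp = handler({"pathParameters": {"id": "42"}}, None)
print(resp["statusCode"])  # → 200
```

Everything outside `handler` runs once per container and is reused across warm invocations, which is why connection setup is conventionally hoisted to module scope.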
I've heard many impressive things about this way of designing your architecture, and it seems to be able to dramatically reduce cost in some cases, sometimes by more than 10 times.
The drawback is that currently there is a lot of vendor lock-in, as Amazon is (to my knowledge) the only cloud service that offers lambda functions with a really tight and well-working integration with their other services (this is important because on their own lambdas are not very useful).
Your input is tightly restricted, and with Amazon in particular, easy to break before you even get to the Lambda code (the Gateway is fragile in silly ways). Your execution schedule is tightly controlled by factors outside your control - such as the "one Lambda execution per Kinesis shard". You can be throttled arbitrarily, and when it just fails to run, you are limited to "contact tech support".
In short, I can't trust that Lambda and its ilk are really designed for my use cases, and so I can only really trust it with things that don't matter.
Auth0 has Web Tasks: https://webtask.io/
Am sure there are many more implementations out there. Agree that vendor lock-in is always a concern.
But the reality is that they don't, with cold-start times upward of 30 seconds. If you use them enough to avoid the cold-start penalties, then you're better off with reserved instances, because lambdas are 10x the price. If you can't handle the 30-second penalty, then you're better off with reserved instances, because they're always on. If you have rare and highly latency-tolerant events, then use Lambda.
There is no cutting edge with serverless on AWS.
And within the wider space of blockchains, improving access to strong anonymization techniques appears to be moving forward quickly: https://blog.ethereum.org/2017/01/19/update-integrating-zcas...
The original expectation was to gradually increase the block size to increase capacity as more users joined the network, eventually transitioning most users to "thin" clients that don't store the (eventually enormous) complete blockchain.
The Core devs right now feel that the current situation (every node a full peer with the complete chain, but maxed out capacity and limited throughput) is preferable for a number of reasons including decentralization, while the Unlimited devs feel that it's time to increase the block size in order to increase capacity and get more users on the network, among other things.
Decisions like this are usually decided by the miner network reaching consensus, with votes counted through hashing power/mined blocks. I'm not sure where things stand at the moment, but it's been interesting to observe.
I understand it's become a rather contentious topic in the community.
IIUC, segwit makes certain kinds of complicated transactions easier to handle (ones with lots of inputs/outputs), possibly allowing more transactions to fit in less space, and lays useful groundwork for overlay networks like Lightning. I think the thinking is that overlay networks can be fast, and eventually reconcile against the slower bitcoin network.
Unlimited would rather just scale up the bitcoin network in place, instead of relying on an overlay network.
You'd probably get better information from bitcoincore.org and bitcoinunlimited.info, or the subreddits /r/bitcoin and /r/btc (for core and unlimited, respectively, they split after moderator shenanigans in /r/bitcoin).
With the brutalist movement, something new started. People went back to code editors to create websites by hand, skipping the third-party, non-web-native design tools that come prefilled with common knowledge and make websites look uniform.
The idea of design silos and brand-specific design thinking is dropped: no more Bootstrap, flat design, material design, etc.
It's like going back to the nineties and reinventing web design. You start from scratch, on your own, and build bottom-up without external influence or help.
It's about creativity vs. the bandwagon, about crafting your own instead of assembling popular pieces.
All a great boon for UX, while being easier to design.
For a brutalist example, let's take HN. It works. Plus, I'm sure the code is quite simple.
The emphasis is on dense content with simple links, and there's not a lot of "live" interactive content on the page. I don't find either site to be particularly ugly or visually offensive, contrary to many of the linked "brutalist" sites.
I'd love to see more sites in the HN/reddit model (here's hoping reddit's coming desktop redesign doesn't lose that), but I wouldn't want to actually use more brutalist sites (outside of individual creative expression, anyway).
TLDR: Fancy fused infrared (LWIR/SWIR) and visible spectrum camera systems may 'soon' be on a passenger airliner near you.
Using infrared cameras to see through fog/haze to land aircraft has been happening for a while now, but only on biz jets or on FedEx aircraft with a waiver. The FAA has gained enough confidence in the systems that they have just opened up the rules to allow these camera systems to be used to land on passenger aircraft.
Combine that with the fact that airports are transitioning away from incandescent lights to LEDs (meaning a purely IR sensor system is no longer enough), and you get multi-sensor image fusion work to do and a whole new market to sell it to.
Here is a blog post (from a competitor of ours) talking about the new rules.
Say a car has a heads-up display for night vision: if it had an SWIR sensor and IR lights, could that cut through fog too? Or is it the LWIR that is able to do that?
Another fun part is that fog at one airport can be different from fog at another, so while the weather conditions at both locations may say visibility is "Runway Visual Range (RVR) 1000 ft", that is for a pilot's eyes, and the same camera may work just fine at one location and not at all at the other.
I'm not sure I did justice to instant apps, because there's a language barrier at play. But here's an example: I use the Amazon app maybe once every 2 weeks, and yet it's one of the apps consuming the most memory on my phone due to background services. After Amazon integrates instant apps, I'll be able to delete the app and just google the product on my phone. The Google search will then download the required page as an app, giving me the experience of an app while not even having it on the phone.
Also, to answer your question: no, it's not the same as a website, because it will be a native Android app with the ability to communicate with the Android OS, like any other Android app.
The possibilities for improved UX that you can accomplish with instant apps are endless. It all comes down to how you want to use them.
If I'm clicking on a link I want to open it with my browser, not with some app. I find this extremely annoying with facebook and even the news carousel already.
I can't open new tabs, copy the URL, or switch to other tabs like I would in the normal browser. This is extremely confusing, and I don't see how it benefits me in any way.
A better implementation would have been to have a popup with a list of compatible apps to run, including an option to run it in a browser like any normal link.
I really hope the NFC bit is opt-in. I don't want to have to manually disable it every time I get a new phone. In fact, even if I've opted into having the SF Park app run when I'm near a parking meter, I want the option to "reject" it, just like I do when I get an incoming phone call.
The website itself specifies which app should be used by publishing a Digital Asset Links file. (https://developer.android.com/training/app-links/index.html)
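For reference, a Digital Asset Links file is a small JSON document served at `/.well-known/assetlinks.json` on the website's domain; the package name and certificate fingerprint below are placeholders:

```json
[{
  "relation": ["delegate_permission/common.handle_all_urls"],
  "target": {
    "namespace": "android_app",
    "package_name": "com.example.instantapp",
    "sha256_cert_fingerprints": ["AA:BB:CC:..."]
  }
}]
```

Android verifies this file against the app's signing certificate before handing the URL to the app, so a third party can't hijack links to a domain it doesn't control.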
Full OS access could mean permissions per page - could be awkward or OK. Much of the app vs. webpage debate here is the same as always - though the offline advantage is gone.
There shouldn't be any problem classloading an Activity, using reflection to instantiate it, and treating it as you would a runtime Activity (as opposed to one declared in AndroidManifest.xml). But I haven't actually done this; there could be some gotchas in incorporating a runtime Activity into the GUI.
IIRC a Google search turned up a few hits on classloading Activities.
Is there some point where websites start to significantly displace apps?
Yes. It happened about ten years ago.
I don't get the appeal, for example, of native apps for things like airlines, amazon, ebay, etc.
It seems to me that this is already slowly happening, and this instant-app thing is the reaction. After all, Google would lose control if everybody started to use the browser.
1. D4M: Dynamic Distributed Dimensional Data Model
http://www.mit.edu/~kepner/D4M/ GraphBLAS: http://graphblas.org
Achieving 100M database inserts per second using Apache Accumulo and D4M https://news.ycombinator.com/item?id=13465141
MIT D4M: Signal Processing on Databases [video] https://www.youtube.com/playlist?list=PLUl4u3cNGP62DPmPLrVyY...
2. Topological / Metric Space Model
Fast and Scalable Analysis of Massive Social Graphs
Quantum Processes in Graph Computing - Marko Rodriguez [video] https://www.youtube.com/watch?v=qRoAInXxgtc
3. Propagator Model
Revised Report on the Propagator Model https://groups.csail.mit.edu/mac/users/gjs/propagators/
Constraints and Hallucinations: Filling in the Details - Gerry Sussman [video]
We Really Don't Know How to Compute - Gerry Sussman [video]
Propagators - Edward Kmett - Boston Haskell [video] https://www.youtube.com/watch?v=DyPzPeOPgUE
What are you working on?
No easy solution here. What is the "Bandcamp of concert venues"? Is there a venue problem of "Where do you play?"
I know the solution is a political one, due to land usage, sound restrictions, and venue size.
Good all-purpose instrument: https://www.kvraudio.com/product/orion-sound-module-by-sampl...
Good orchestral instruments: http://vis.versilstudios.net/vsco-community.html
A helpful article I wrote with links and basic advice for new musicians: https://blog.rileyreverb.com/how-to-be-a-musician-58511c4e18...
Personally, I love and recommend Ableton Live, which features an easy-to-use interface and workflow, lots of options for experimentation and extensions, and a great, large community as well. Good choice for beginners and experts alike. Plus, with Ableton Push you have the option to get an excellent hardware controller that is tailored to your DAW, but it isn't something you'd need from the start.
Alternatively, you almost can't beat Logic on price considering its features and performance. I'd say it is more complex, but that's subjective.
Both Logic and Live (Suite version) offer a complete solution, including high quality instruments, synths and effects.
Hardware is optional, but a simple MIDI keyboard for less than 100 bucks will help a lot.
For now, Reaper and a few free VSTs will do. I find myself bumping up against the fact that Reaper was made for live music and its devs are understandably keeping the focus there, even though they do good work on the MIDI roll. They always nail down a few irritants in each release.
You'll go further and have an easier time if the community around your tools makes the same kind of music. Choosing tools mostly comes down to what you want to do. If you want to do electronic, Ableton is a good bet.
By the way, if Ableton remains too expensive for your taste, there's always Bitwig, which isn't quite as mature and has a much smaller, but growing community, yet it's very similar to Ableton's approach to music production.
With the right program and a distinctive chemistry to target in the unwanted cell population, this flexible technology has next to no side-effects, and enables rapid development of therapies such as:
1) senescent cell clearance without resorting to chemotherapeutics, something shown to extend life in mice, reduce age-related inflammation, reverse measures of aging in various tissues, and slow the progression of vascular disease.
2) killing cancer cells without chemotherapeutics or immunotherapies.
3) destroying all mature immune cells without chemotherapeutics, an approach that should cure all common forms of autoimmunity (or it would be surprising to find one where it doesn't), and also could be used to reverse a sizable fraction of age-related immune decline, that part of it caused by malfunctioning and incorrectly specialized immune cells.
And so forth. It turns out that low-impact selective cell destruction has a great many profoundly important uses in medicine.
Part of the problem in old people is that they have too much memory in the immune system, especially of pervasive herpesviruses like cytomegalovirus. Those memory cells take up immunological space that should preferably be occupied by aggressive cells capable of action.
Another point: in old people, as a treatment for immunosenescence, immune destruction would probably need to be paired with some form of cell therapy to repopulate the immune system. In young people this isn't needed, but in the old there is a reduced rate of cell creation - loss of stem cell function, thymic involution, etc. That, again, isn't a big challenge at this time, and is something that can already be done.
At present, sweeping immune destruction is only used for people with fatal autoimmunities like multiple sclerosis, because clearance via chemotherapy isn't something you'd do if you had any better options - it's pretty unpleasant, and produces lasting harm to some degree. Those people who are now five or more years into sustained remission of the disease have functional immunity and are definitely much better off for the procedure, even with its present downsides, given where they were before. If the condition is rheumatoid arthritis, however, it becomes much less of an obvious cost-benefit equation, which is why there needs to be a safe, side-effect-free method of cell destruction.
"Our approach is quite different from most other attempts to clear these cells. We have two components to our potential therapy. First, there is a gene sequence consisting of a promoter that is active in the cells we want to kill and a suicide gene that encodes a protein that triggers apoptosis. This gene sequence can be simple, like the one that kills p16-expressing cells, or more complicated, for example, incorporating logic to make it more cell type specific. The second component is a unique liposomal vector that is capable of transporting our gene sequence into virtually any cell in the body. This vector is unique in that it is both very efficient and appears to be very safe even at extremely high doses."
"There's a subtle but profound distinction between our approach and others. The targeting of the cells is done with the gene sequence, not the vector. The liposomal vector doesn't have any preference for the target cells. It delivers the gene sequence to both healthy and targeted cells. We don't target based on surface markers or other external phenotypic features. We like to say "we kill cells based on what they are thinking, not based on surface markers." So if the promoter used in our gene sequence (say, p16) is active in any given cell at the time of treatment, the next part of our gene sequence - the suicide gene - will be transcribed and drive the cell to apoptosis. However, if p16 isn't active in a given cell, then nothing happens, and shortly afterwards the gene sequence we delivered would simply be degraded by the body. This behavior allows our therapy to be highly specific and importantly, transient. Since we don't use a virus to deliver our gene sequence, and our liposomal vector isn't immunogenic, our hope is that we should be able to use it multiple times in the same patient."
.Net Core: Finally, cross platform .Net. Deploying .Net services to Linux is a dream come true. Can't wait for the platform to stabilize.
Windows Server 2016: For "legacy" applications forced to stay on Windows, containers and Docker on Windows is a game changer. One step closer to hopefully making Windows servers somewhat manageable.
Metamaterials: essentially a material engineered to have a unique property. By precisely controlling a material's structure, you can influence how it interacts with electromagnetic waves, sound, etc. You can create materials with unique properties, such as a negative refractive index over certain wavelengths. It's kind of a novelty, but people are building "cloaking devices" using metamaterials, i.e. bending electromagnetic waves around a material in certain ways to make it appear invisible at certain frequencies.
Graphene (and other 2D materials): these materials are a relatively recent discovery (graphene was first isolated in 2004), and graphene has a number of interesting properties. In particular, its electrical and thermal properties make it promising for a number of applications. I think it could possibly find applications in batteries, transistors, and capacitors. At the moment it is a very expensive material to manufacture, which makes it (currently) unsuited for commercial applications. There is a heap of active research involving graphene at the moment.
Google's DeepMind put out some kind of cool stuff recently, but I'm mostly just excited for anything that Ilker Yildirim is doing with Joshua Tenenbaum, because it seems to triangulate more with how humans think about physics. When I was at CogSci 2016, Joshua mentioned combining this with analogical reasoning, and that also sounded super cool, even though I'm not sure how the two fit together.
On the networking side of things, I'm excited about network virtualization and the potential that tools like Docker and Kubernetes give to virtualizing large and complex network topologies.
And as an employee of an IT-heavy enterprise, seeing DevOps becoming a thing makes me happy, even if adoption is slow and expectations are high. It's still better than waiting 6 months to get a couple of VMs to deploy my projects to...
Regenerative medicine: understanding DNA code and restoring cells and organs, making eternal youth possible. It will take decades of hard work.
Ending cancer: We are studying virus mutations, so we can attack them without invasive techniques.
Nuclear fusion: We are simulating plasma physics. This is going to be enormous in ten years or so imho.
We're porting a sizable application to .Net Core so we can be on Linux and save cost and time on instance launch.
I'm writing an in-depth blog post series about the process because I haven't found any significant migration stories. I'm hoping it will help a lot of people through the process.
C# has become such a joy for me to work with. The language itself has been progressing at an impressive pace without adding too much superfluous stuff or making it unwieldy. I feel like it's gotten to the point that most of the pain of strong typing is gone without sacrificing any of the benefits.
I don't have an exact date unfortunately, but it'll be on there sometime before March 2nd to coincide w/ a .NET Rocks podcast. I'll share on HN though and bump this comment when it's released : )
It looks a lot like the PRUs in the Beaglebone black, where you could have normal, non-real time OS (linux) run on the main cores, but delegate real time tasks to the minion cores...and they share memory directly with the main cores.
Basically, you end up with the capabilities of both a Raspberry PI and an Arduino, on a single chip.
Of course, freely-available and well-supported CPU IP can be very cool!
A significant hunk of a Cortex-M die is the ARM licensing fee. If we can drop that? That would be an order of magnitude of savings on my BOM.
Can you give a few numbers/guesstimates for common MCUs?
I suspect part of the ARM deal is an agreement not to disclose price info.
The SiFive chip is >300MHz on a 180nm process, but the higher-end RISC-V chips, while having competitive perf/clock, still have lower clock speeds than the big guys. That is probably just a matter of time though.
In UX, an interesting trend is a flood of software tools which help during design, evaluation, research, etc.
Also, adaptive UI that changes based on user attributes and past behaviour seems to be trendy now (supported by the online marketing field, with auto-optimizing interfaces that optimize for conversion autonomously, etc.)
What you see is based on your previous usage of the app/site and not generalized what everyone is looking at. Pretty common...
- Amazon suggestions, Google results, Facebook stream or even your auto-correct suggestions of your phone keyboard.
Where the actual controls, tools, menus change in favor of your usage behaviour, or desired behaviour of "users like you".
(It's not quite clear if this actually helps or harms the UX, because the UI could change without the user understanding why a menu item is no longer available where it used to be.)
- I am blank on real-world example "software" here
- But web/landingpage optimization tools like optimizely use predefined rules to change anything on the UI, (like showing a CTA button or a video, hiding a menu, etc.) where others like dynamicyield move into the direction of AI-automating that test-generation and decision making in favor of a single metric (CTR / Conversion / etc.)
In the end you could argue that every real-world application is only using "adaptive content" and not actual "adaptive UI".
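Tools like the ones named above don't publish their internals, but the core of "AI-automating that test-generation and decision making in favor of a single metric" can be sketched as a multi-armed bandit. Here is a minimal, hypothetical epsilon-greedy sketch (variant names and conversion rates are entirely made up):

```python
import random

def epsilon_greedy(variants, get_reward, rounds=2000, epsilon=0.1, seed=0):
    """Show UI variants, observe clicks, and shift traffic toward the winner."""
    rng = random.Random(seed)
    shows = {v: 0 for v in variants}
    clicks = {v: 0 for v in variants}
    for _ in range(rounds):
        if rng.random() < epsilon or not any(shows.values()):
            v = rng.choice(variants)  # explore: pick a random variant
        else:                         # exploit: pick the best observed CTR so far
            v = max(variants, key=lambda u: clicks[u] / shows[u] if shows[u] else 0.0)
        shows[v] += 1
        clicks[v] += get_reward(v)    # 1 if the (simulated) user clicked, else 0
    return max(variants, key=lambda u: clicks[u] / shows[u] if shows[u] else 0.0)

# Simulated users: the video variant converts at 12%, the button at 5%.
rng = random.Random(1)
rates = {"cta_button": 0.05, "cta_video": 0.12}
best = epsilon_greedy(["cta_button", "cta_video"],
                      lambda v: 1 if rng.random() < rates[v] else 0)
```

Real optimization tools are of course far more sophisticated (significance testing, segmentation, contextual bandits), but the "optimize a single metric autonomously" loop has this shape.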
I believe the biggest advancement in the field of education is going to come with VR. With VR, we can dramatically reduce the cost of "learning while doing", which should be the only way of learning. With AI, we can provide highly personalised paths for learners.
VR and AI technologies are finally coming to a point where together they can provide a breakthrough in industries which have been mostly untouched for decades.
I think, for middle school, it's easy to underestimate how much of education is not actual content. How do you deliver education that targets the teenage anger / passivity / disappointment / and emotional roller coaster?
I'm 100% with you on this. I've been saying this since VR became mainstream, I'm dying to start something in the e-learning space that takes advantage of VR/Augmented Reality but have no idea where to start.
My house fronts onto a small but fairly active 4-lane regional/suburban highway which I need to cross whenever I get the bus home, and also sometimes when I leave depending on which direction I'm headed. There are complete traffic breaks every 1-5 minutes or so, and it never gets jammed (there are no traffic lights nearby and it's a long stretch of road), so for a highway it's reasonably tame. My main goal is always trying to take advantage of the "near-breaks" that sometimes happen where the road almost completely clears and I can cross if I'm willing to dodge traffic. I especially try to do this when there's a bus approaching the stop across the road!
I've slowly gained confidence and experience over the past 13 years I've lived where I do (I'm 26 now, FWIW), and I now know when, how and why I can safely begin to cross even when cars are still on the road, so I often don't have to wait for complete breaks. That's been a fairly recent development; my progress hasn't been instantaneous.
I'm at the point where I'm trying to improve my ability to break the road down into lanes and actively track the activity in all the lanes simultaneously, so I can properly "leap-frog" across the road even more quickly. I am (perhaps understandably) not very good at this bit at all: I've found that taking (opposing!) traffic motion across multiple lanes and turning that into a precise, realtime and confident/low-doubt go/no-go actually requires a fair bit of neurological development. Problem is, road-crossing has no/few common analogues from other life-skills situations that relate to spatial awareness, gross motor coordination, etc, so it's hard to create and iteratively improve this ability.
The main two reasons for this, I think, are that a) road-crossing is potentially life-threatening, so you want to get it right, and b) (important bit) we all seem to be taught to treat crossing roads as almost as dangerous as jumping out of planes - it's something nerve-wracking that must be done as quickly as possible before any damage (which could happen at any moment) is done. I'm guessing this ideology gets rooted in our heads due to our parents' overarching instincts to protect us from harm at all costs, juxtaposed with the fact that 99.9% of the population does not have a sound understanding of psychology and an idea of the impact of different presentational styles. (In my own case I was simply taught to be extremely careful, but I only had experience with high-traffic roads after 13, and I had a general fear of roads before that point as I didn't need to cross that many, and when I did I was never alone.)
I think that if we can bootstrap ourselves to the point where we can eliminate the FUD and "helpless prey"/deer-in-headlights mentality surrounding crossing roads, we can begin to actually develop mental models that will likely serve us equally well in many different kinds of split-second situations that involve precise timing.
VR would be a way to get to that point: by creating a virtual environment full of various different types of vehicles and environments and simulating those vehicles bearing down on us (using a highly physically accurate 3D engine), we could actually learn through infinite repetition what 60 miles an hour looks like starting half a mile away, or what 20 miles an hour looks like starting a quarter of a mile away, etc etc. And we could slowly get to the point where we can confidently say things like "I know that I'll just make it across this road before that car does if it doesn't change speed" with much greater accuracy than we currently can. Some users may even begin to accurately guess vehicle speed just by watching the vehicle for a few seconds. It would be kind of fun and awesome to make a VR system where kids can be exposed to these kinds of experiences from a young age as an almost standard thing.
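The go/no-go judgment described above is, at bottom, a comparison of two times: the car's time-to-arrival versus your time-to-cross plus a safety margin. A toy sketch of that comparison (all numbers are purely illustrative, and obviously not safety advice):

```python
def can_cross(car_distance_m, car_speed_kmh, lanes, lane_width_m=3.5,
              walk_speed_ms=1.4, margin_s=3.0):
    """Toy go/no-go: is the walker across (plus a margin) before the car arrives?"""
    car_speed_ms = car_speed_kmh * 1000 / 3600
    time_to_cross = (lanes * lane_width_m) / walk_speed_ms
    time_to_arrival = (car_distance_m / car_speed_ms
                       if car_speed_ms > 0 else float("inf"))
    return time_to_arrival > time_to_cross + margin_s

# A car 200 m away at 60 km/h arrives in ~12 s; crossing 4 lanes takes ~10 s,
# so with a 3 s margin that's a no-go. At 400 m (~24 s away) it's a go.
print(can_cross(200, 60, lanes=4))
print(can_cross(400, 60, lanes=4))
```

The hard part being described is exactly that humans don't get `car_distance_m` and `car_speed_kmh` as numbers; the VR idea is to train the perceptual estimate of those inputs through repetition.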
Besides a projector system which wouldn't be nearly as realistic, the only alternative to VR I can think of is repeatedly crossing an actual road all day. That would theoretically work, but there are four risk factors: a) is obvious, the fact that each crossing carries discrete risk; b) the fact that exhaustion from running back and forth would raise the stakes of (a); c) the fact that I'd be trying to be adventurous for the sake of learning which would make things worse; and d) the fact that as I gained experience and skill my risk of complacency would go through the roof due to repeated success.
Point (d) is valid for a simulation, too, but could be combated by constantly mixing up the environments - plain road; road with sharp bends; road with car speeding at 60 miles an hour around sharp turn or behind hill; etc - and maybe weird things like only allowing you to end the game when you failed, etc.
The huge controversy with this (there is a catch) is that young minds would latch onto this new kind of information instantly and turn kids into absolute ninjas capable of crossing complex roads routinely leaving just inches to spare. I see the average retiree driver heart attack rate going through the roof, to say the very least.
Because of this, I sadly don't see a school curriculum supporting something like this, and trying to make a company out of it would quite likely fail too because of the constant stream of negative press it would inevitably attract.
All the ingredients are there - you can repeat as much as you want with no cost, there's the element of competition and winning, and there's nothing stopping you from being adventurous and moving at the absolute last minute. Of course kids (full of energy, no idea what to do with it) are going to game that to the hilt to impress their friends. I have doubts that a game engine would be able to competently prevent that - I'm thinking of a "minimum winning crossing distance" metric, but I'm not sure if that would cover everything.
My crazy argument is to let it happen anyway: _let them_ scrape through the levels with inches to spare - because it might mean someone can save a life one day because they have the confidence to know they'll be able to do it in time. I've seen crazy internet videos of things like people dashing onto train tracks to rescue others at the last moment, and I'm not sure if I'd be able to manage that quickly enough because I'm missing precisely the information I describe here. (These are the related concept areas I mentioned at the start of this post.)
I think something like this would likely be best done as an open source project, in a framework where artists and modelers can easily collaborate and feed back art assets for new environments. The whole thing would need to stand on its own to gain traction, I think.
This is definitely not the kind of thing that looks awesome on paper, although I can see it being a lot of fun to work on, and something where you know you'd be teaching some really cool and liberating skills.
FWIW, I have absolutely no hope of getting my hands on any VR hardware anytime soon - due to circumstances entirely outside my control I've been stuck on hand-me-down PCs that average 10 years old for the past 2 decades - so I just thought I'd share it in case you (or anyone else) wants to play with it.
To clarify, the centerpoint of what I was describing above was that VR would provide the ability to repeatedly watch a car approach from a distance and, at the same time, learn what speed it was going. If I had that I could do a lot of things.
Perhaps it will be detectable, with technical effort, for some time to come, but as a propaganda and government corruption tool it will complete the circle started by the "telescreen/ankle bracelets" we all carry in our pockets.
The allure is there - be convincingly you, but also look however you want to look. That would get eaten up by a lot of people.
2) More understanding of the "bio psycho social" model of mental illness, with better coordination across different agencies to prevent suicide.
So sheer quantity of believers doesn't work for making a point.
They're basically saying that sheer quantity of believers in anything proves that their God exists!
Coming from a Christian perspective however, I would agree in general people have evidence to believe in God. I don't intend the quote from the Bible below to serve as any sort of evidence. This would not be a logical line of reasoning for someone who does not believe in the truth of the Bible. However, it may serve to further clarify my position.
"For since the creation of the world God’s invisible qualities—his eternal power and divine nature—have been clearly seen, being understood from what has been made, so that people are without excuse." -- Romans 1:20
It definitely doesn't provide any proof, but I didn't say that. I said "sheer quantity of believers doesn't work for making a point", as in, how can the number of people in any way support the validity of any beliefs held by those people?
There's plenty of concrete historical evidence that beliefs held by many many people turned out to be wrong. It's basically the story of science. I.e. we have clear evidence that the fact that a lot of people hold a belief does not mean anything about the truth of that belief.
> In any case, at least Christianity and Islam each have over one billion believers globally.
But that illustrates my original point.
I find it difficult to deny the achievement of tangible progress that is implied by, for example, the self-driving car.
Most of the time it feels like the people who are somewhat successful in those branches simply got lucky by randomly mixing elements A, B and C in an unexpected manner and boom -- magic.
In other words, things progress painfully slow and almost always it's due to intuitive shower/sleep revelations than anything else.
I built a program that played tic-tac-toe in '94, using a combination of traversing a problem tree while evaluating the positions using a neural network.
To my understanding, this is basically the same approach used to develop a Go player, only that at the time it took a month to train a minimal network to do anything useful at all on my small Amiga 500...
You could read, in my older comments, about a fundamental difference between a properly controlled, replicable scientific experiment and a computer simulation according to some abstract/unverified model, and about why the results of such simulations cannot be substituted for experimental results or any form of evidence.
It sounds kind of like you think that all attempts to codify human knowledge are bunk!
In programming in companies: the realization that internal customers not having a choice of internal IT providers hurts IT, because it reduces IT's need to deliver valuable solutions effectively.
In leadership: management structure is a framework to enforce standardization and generally doesn't adapt well to change, even with the latest management silver bullets (lean, Agile, flat-orgs, etc)
Also in leadership: profound changes are occurring in society, and geographies no longer define cultures.
In commercial writing: it's still early, and this takes time, but the concept of the "book" and how it's created is changing. Technologies that allow writers, editors, and beta readers to work on the manuscript simultaneously are increasing the velocity of change.
In art in general: someone else here mentioned music creation and payment is enabling entrants to sustain themselves in niche markets. This is happening in nearly all art forms, not just music. As electronic transfer fidelity increases, more art can be digitized, monetized. Look for more politicized, more global-reach art.
All these things stem from a greater understanding of the world and of human beings, starting with ourselves. It's important to realize each human being is a highly complex system and that generalizations about groups of humans are increasingly being challenged as scientifically unsound.
In particular, wireless transmitters for roomscale are really exciting - seriously, I cannot wait to get rid of the wire-to-head era - as is roomscale for mobile devices.
The Vive getting additional trackers is also super-cool, as that will enable some much better forms of locomotion through foot-tracking. It'll take a little while to take off but I expect the Lighthouse tracking ecosystem to produce all kinds of cool things.
(Not all in VR, either. Drones plus Lighthouse, for example...)
I helped with it for a little while, but there were problems:
* The main developer was resistant to using a package manager or bundling dependencies into a compressed form; dependencies had to be in the same git repo, fully extracted (a bit of a "code smell").
* Dependencies could take months to get security updates.
* Documentation couldn't be in the git repo.
* Python 3 was "not an option".
* The main developer has limited experience with the torrent protocol.
It is an interesting project: but it is not a private or secure one.
So I can highly recommend the field of remote sensing as there are many interesting problems to solve.
The same thing is also done on an international level, e.g. the European Union provides platforms, as do the environmental agencies in the US.
Clarification: These footprints are not satellite-derived (that is the goal, but it doesn't yet work well enough for many applications; we'll probably get there...) but are hand-crafted by people working in city planning. The point is that you can use this data as training data.
Flexible solar panels, LED lighting with open source drivers, and the new generation of DC refrigerators are all incredibly exciting and are allowing us to experiment with living without grid electricity.
Building a simple static website and instagram. We'll share pics with HN soon.
We'll also have the bus at PyCon in Portland.
It is much more efficient than DC fridges of even just a few years ago. It has configurable settings to respond to battery levels and can be configured via wifi.
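The vendor's actual firmware logic isn't public, so purely as an illustration of "configurable settings to respond to battery levels", here is a hypothetical sketch with made-up thresholds:

```python
def fridge_setpoint(battery_pct, normal_c=3.0, eco_c=6.0,
                    cutoff_pct=20, eco_pct=50):
    """Pick a compressor setpoint (deg C) from the battery state of charge.

    Below cutoff_pct the compressor is held off entirely (returns None);
    between cutoff_pct and eco_pct it runs at a warmer, lower-power setpoint;
    above that, normal operation. All thresholds are illustrative, not the
    vendor's actual logic.
    """
    if battery_pct < cutoff_pct:
        return None          # compressor off to protect the battery bank
    if battery_pct < eco_pct:
        return eco_c         # warmer setpoint, less compressor runtime
    return normal_c

print(fridge_setpoint(80))   # plenty of charge: normal setpoint
print(fridge_setpoint(30))   # getting low: eco setpoint
print(fridge_setpoint(10))   # critical: compressor off
```

The real value of this kind of configurability off-grid is exactly this: trading a few degrees of cooling for battery headroom on cloudy days.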
Also, I'm really looking forward to the ActivityPub implementation; that'll do a lot of interesting things for the decentralized web.
Overall, I'm most excited about VR/AR/MR in relation to storytelling and education and how the two can be combined. Houdini and Houdini Engine for UE4 are definitely worth considering as part of your VR/AR development stack.
For example http://www.aeromobil.com/ or http://lilium-aviation.com/
I'm personally rather disappointed that we still don't have a moon colony. Making that happen is unfortunately not part of my field.
I really don't want people doing that over my apartment or favorite urban trail.
Edit: there is a lot to be excited about these days
Analysis has always been an area that the tech community has lacked, ever since it was overdone back in the days of structured programming. It's really cool to bring back a bit of structured analysis as just another tool in the DevOps pipeline and join up the information with all the folks that need it.
Finding this tricky to parse, got a link or repo?
The general idea is to be able to have informal, unstructured business conversations, take those conversations and type extremely brief, semi-structured (tagged) notes, and have those notes "compile" out to various places throughout the organization where they might be needed. One way to think of it is Requirements/Use Cases/User Stories without the rigor. (Or rather, without the rigor and the onerous BS folks always seem to add around them.)
Here's the repo. There's also a PDF with details of the tagging language I can send if you're interested. Ping me.
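I can't reproduce the tagging language from the PDF here, but a guess at the general shape of the idea (the tag names, routing table, and note syntax are all hypothetical, not the project's actual design) might look like:

```python
import re
from collections import defaultdict

# Hypothetical routing table: which destination sees which tag.
ROUTES = {"req": "requirements-backlog",
          "risk": "risk-register",
          "ops": "ops-runbook"}

def compile_notes(text):
    """Route lines like '#req users want CSV export' to their destinations."""
    out = defaultdict(list)
    for line in text.splitlines():
        m = re.match(r"#(\w+)\s+(.*)", line.strip())
        if m and m.group(1) in ROUTES:
            out[ROUTES[m.group(1)]].append(m.group(2))
    return dict(out)

notes = """#req users want CSV export
#risk vendor API has no SLA
untagged chatter is ignored"""
compiled = compile_notes(notes)
print(compiled)
```

The point of the "compile" framing is that the note-taker types one stream during the conversation, and the tool fans it out to each team's artifact afterwards.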
as long as you have an Nvidia GPU
Long story short, so many processes I work with are done completely manually, which is a colossal waste of time. When I started, the person who previously did my job had about 7 main processes they completed monthly, which took about 60 hours to complete. Those 7 processes take me about 10 hours to do after I built automated workbooks.
The sad thing is that these excel capabilities have been around forever, but no one understands them.
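The workbooks themselves aren't shown, but the same kind of monthly roll-up (raw rows in, per-category totals out) is easy to script outside Excel too. A minimal sketch using only the standard library (the column names and data are made up):

```python
import csv
import io
from collections import defaultdict

def monthly_rollup(csv_text):
    """Aggregate raw transaction rows into per-department totals."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["department"]] += float(row["amount"])
    return dict(totals)

raw = """department,amount
sales,1200.50
ops,300.00
sales,99.50"""
print(monthly_rollup(raw))  # {'sales': 1300.0, 'ops': 300.0}
```

In Excel itself the equivalents (Power Query, pivot tables, SUMIFS) have indeed been around forever; the gap is almost always knowledge, not tooling.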
Lots of cool stuff in the space like Kubernetes, Swarm, CoreOS, rkt!!
In my view, there is still huge room for applications that combine wireless and sensors, and we already have the web/native platforms. This is such an exciting development!
Could you please elaborate on this. I don't understand how the camera industry could go beyond image sensors. They wouldn't be the camera industry anymore if they did that.
Can you please expand on this ?
There are some nice protocols and topologies in the Wireless Sensor Network topic. While the devices' sensors collect environment data, the devices can communicate with each other in several ways (e.g. ad hoc, hierarchical) and command each other to behave in different ways (e.g. quadcopters maneuvering in whatever pattern is beneficial).
Some more ideas on that flying-object example: it could calculate overall battery usage and balance it across the cluster by wireless charging on the fly.
Underwater devices or robots would be more interesting.
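As a sketch of the battery-balancing idea: if every node reports its remaining charge, a simple greedy assignment can push the heaviest tasks to the best-charged nodes. Real WSN schemes negotiate this over the radio; this centralized version, with made-up node and task names, is only illustrative:

```python
def rebalance(batteries, tasks):
    """Greedy sketch: give each task, heaviest first, to the best-charged node.

    batteries maps node -> remaining charge; tasks maps task -> energy cost.
    """
    assignment = {}
    charge = dict(batteries)
    for task, cost in sorted(tasks.items(), key=lambda kv: -kv[1]):
        node = max(charge, key=charge.get)  # most charge left takes the big jobs
        assignment[task] = node
        charge[node] -= cost
    return assignment

# Three quadcopters with different charge levels, three tasks of varying cost.
plan = rebalance({"q1": 80, "q2": 55, "q3": 90},
                 {"scan": 30, "relay": 10, "hover": 20})
print(plan)
```

A distributed version would have each node gossip its charge to neighbours and bid for tasks, but the balancing objective is the same.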
Another hot topic is organoids and organs-on-a-chip. These are experimental systems where stem cells are induced to grow into structures similar to embryos or organs, which allow the study of development and facilitate drug testing, etc.
Thirdly, advances in sequencing made it possible to study what kind of bacteria live symbiotically within and on us. The composition of this so-called microbiome seems to widely affect body and mind.
Finally, in my personal field, the simulation of how "simple" cells build complex structures and solve difficult tasks, the most exciting development is GPGPU :-)
In summary, the use of machine learning can help us develop better representations of chemical reactions, catalyst behavior, and we can now use adaptive learning to create closed-loop systems to identify, carry out, and optimize chemical processes to reduce environmental impact, reduce energy usage, and decrease costs.
The state of the art isn't quite there, but I see no major conceptual barriers left -- just a matter of implementing it.
For some details, see:
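As a toy illustration of the closed-loop idea: propose conditions, "run" the (here simulated) reaction, keep improvements, repeat. A real system would drive lab hardware and likely use Bayesian optimization rather than this naive hill-climb; the simulated reactor and all numbers below are invented:

```python
import random

def closed_loop_optimize(run_experiment, low, high, iters=60, seed=0):
    """Hill-climb a single condition (e.g. temperature) toward maximum yield."""
    rng = random.Random(seed)
    best_x = (low + high) / 2
    best_y = run_experiment(best_x)
    step = (high - low) / 4
    for _ in range(iters):
        cand = min(high, max(low, best_x + rng.uniform(-step, step)))
        y = run_experiment(cand)      # the "lab run"
        if y > best_y:
            best_x, best_y = cand, y  # keep improvements
        step *= 0.95                  # narrow the search over time
    return best_x, best_y

def sim(temp_c):
    """Simulated reactor: yield peaks at 180 C."""
    return 1.0 - ((temp_c - 180.0) / 100.0) ** 2

best_t, best_yield = closed_loop_optimize(sim, 100.0, 250.0)
```

The interesting engineering is in the loop's edges: representing conditions so the optimizer can propose sensible candidates, and getting reliable yield measurements back from the instrument.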
This is coupled with a move away from cookies.
Because I am really sick of Google serving me ads for stuff I recently searched for and subsequently bought.
Basically, it pits two networks in a "duel" and one of them is a generator network that learns to make images.
This will change real estate websites as well. I can just query for houses with X visual features
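Assuming a vision model has already extracted visual features per listing, the query itself is just a subset match. A sketch with made-up data (no real-estate site exposes exactly this, as far as I know):

```python
# Hypothetical listings with model-detected visual features per house.
listings = [
    {"id": 1, "features": {"hardwood floors", "bay window", "granite counters"}},
    {"id": 2, "features": {"carpet", "bay window"}},
    {"id": 3, "features": {"hardwood floors", "exposed brick"}},
]

def query(listings, wanted):
    """Return ids of listings whose detected features include everything wanted."""
    return [h["id"] for h in listings if wanted <= h["features"]]

print(query(listings, {"hardwood floors"}))       # houses 1 and 3
print(query(listings, {"bay window", "carpet"}))  # house 2 only
```

The hard part is the feature extraction, not the query; once photos are tagged reliably, this becomes an ordinary search problem.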
A fast compiled ruby-like programming language.
I've heard it consumes a lot of memory which may be a problem though (this laptop only has 2GB RAM).
No one needs Babel to write stellar code IMHO. Unfortunately it is not about the quality of the code you write, it is about being politically correct. This whole ES6/ES7 thing is largely based on what CoffeeScript, LiveScript, etc. already did, much better, more than 5 years ago. And I dare to guess that most of the Babel proponents don't even realise it's just a transpiler that they will need until the end of the project's life.
note: I expect serious down votes as opposing Babel is almost a serious crime nowadays and proves my unlimited stupidity.
No, web development is not really exciting nowadays, it is more terrifying; who knows where the hype goes tomorrow? Maybe soon I will be forced to write in MS Typescript if I want to be taken seriously. Same goes for Redux, because Flux is so 2014... you must be very brave not to use Redux! I can go on and on, way too many examples.
Finding a web developer job now is largely about complying with made-up standards that become more complex every day. And I've seen quite some horrible code bases that perfectly comply! It's a very sad reality.
1. You can write great applications without the latest language features
2. The latest language features do make development easier
Babel is necessary for #2 if you don't have control over the browser which your users use to access your site. If you don't want to transpile, don't. It's as simple as that. However, the future of JS is the future of web development, that is indisputable. Using Babel allows you to stay closer to the future and/or use these great new language features.
You also brought up TypeScript.
3. Types make development much easier
TypeScript is a combination of types and a transpiler for the ability to use the latest ES features. Types are great, providing:
- Better self-documenting code
- More safety
- IDE interop to provide completion, as seen in VS Code
> note: I expect serious down votes as opposing Babel is almost a serious crime nowadays and proves my unlimited stupidity.
From the HN Guidelines: "Please don't bait other users by inviting them to downvote you or proclaim that you expect to get downvoted."
> Maybe soon I will be forced to write in MS Typescript if I want to be taken seriously.
Many would say that someone should be forced to write in a typed language in general in order to be taken seriously.
> If you don't want to transpile, don't. It's as simple as that.
Are you kidding? Please tell me your estimation of how many developers write in ES6/ES7 without using Babel or other transpiler???
You don't really need to tell me what types are about; I have a long-standing C/C++ background. And I really don't need Typescript. I use dynamic type checking based on ES3, which has done the job flawlessly for years. It's very rare for me to have a type-related bug. I'm always wary of people who preach Typescript; what code do they write to get into so much trouble with types?
> Many would say that someone should be forced to write in a typed language in general in order to be taken seriously.
omg.. 'forced', this is bad.
I'm only looking forward to webassembly, that will be the real game changer and the end of JS as we know it.
> Are you kidding? Please tell me your estimation of how many developers write in ES6/ES7 without using Babel or other transpiler???
I was implying that you simply don't use ES6.
Rather than focussing solely on what's wrong, it's very important to also include options you think are better, and why. Looking down your nose at others, describing them as js hipsters using politically correct tools does no one any good, and makes it even less likely people will listen to what you have to say. And on HN, expressing the expectation of down votes is a guaranteed method of receiving them; it's against the guidelines and adds nothing to your comment.
There are a lot of choices in the web development space today. The desire to standardize on something (such as the push for Babel and Webpack) is laudable in that they recognize that so much choice is not necessarily good: it makes it more difficult to decide what to use (sometimes good enough is just that), and splits resources that may otherwise be used to improve a more limited number of options. That's not to say Babel and Webpack are the best options: just that I understand the motivation for standardization and push to popularize a few (rather than all) options.
This breaks the HN guidelines (https://news.ycombinator.com/newsguidelines.html). Please don't.
Sorry that all of these optional features and optional addons ruined your day.
There have been a few Kickstarters which claim to reduce the amount of sleep you need, but they've all turned out to be nonsense AFAIK.
I agree it's weird that we have nothing when we spend 1/3rd of our lives asleep.
cross platform, open source, very fast
Their current scale-up of instruments I think means that they're looking to aggressively push into diagnostic applications.
The lack of competition is unfortunate however.