HackerNews is very developer-focused. If you guys saw what a radiologist does on a 9-5 basis you'd be amazed it hasn't already been automated. Sitting behind a computer, looking at images and writing a note takes up 90% of a radiologist's time. There are innumerable tools to help radiologists read more images in less time: Dictation software, pre-filled templates, IDE-like editors with hotkeys for navigating reports, etc. There are even programs that automate the order in which images are presented so a radiologist can read high-complexity cases early, and burn through low-complexity ones later on.
What's even more striking is that the field of radiology is standardized, in stark contrast to the EMR world. All images are stored on PACS which communicate using DICOM and HL7. The challenges to full-automation are gaining access to data, training effective models, and, most importantly, driving user adoption. If case volumes continue to rise, radiologists will be more than happy to automate additional steps of their workflow.
Edit: A lot of pushback from radiologists concerns the feasibility of automated reads, which have been promised for years with few coming to fruition. I like to point out that the deep learning renaissance in computer vision started in 2012 with AlexNet; this stuff is very new, more effective, and quite different from previous models.
Curious to know what sort of methods you used then if you don't mind sharing.
The best results were with backpropagation neural networks: http://www.sciencedirect.com/science/article/pii/0888613X949...
But we also used fuzzy logic neural networks with genetic algorithms: http://ieeexplore.ieee.org/document/712156/?reload=true
That was 24 years ago.
There might be some interesting things that can be learned from this kind of info and applied to the current status quo (I'm definitely not arguing that there is a sociopolitical element).
Maybe if MRI scans get cheap enough (due to advances in cheap superconductors or whatever) that it's economically feasible to scan people regularly as a precautionary measure (rather than in response to some symptom), then the bulk of the cost might be in having the radiologist look at the scans. In those "there's nothing wrong but let's check anyway" cases, it might be better to just have the AI do it all, even if its accuracy is lower, if it represents a better health-care-dollar-spent to probability-of-detecting-a-serious-problem ratio. (If the alternative is to not do the scan at all because the radiologist's fees are too expensive, then a cheap scan is better than nothing.)
I can see an argument that if the company was sued then it could try to push the blame onto the software vendor, but surely that would be decided based on the contract between company and software vendor, which is usually defined by the software license.
Machine learning is already used in Radiology, and chances are Radiology will eventually be the domain of machines. But it's going to take some time to get there. Healthcare is extremely regulated and closed-minded.
Most of the people in the thread you listed above are clearly biased towards medicine and against computer science and machine learning. But machine learning was having success in diagnostic medicine well before the deep learning boom that thread talks about.
- Some RETS recorded requests/responses: https://github.com/estately/rets/blob/master/test/vcr_casset...
(Basically something XML-based (SOAP?), with cookie + authorization, that seems very ASP.NET / Windows Server centric)
- DICOM ("It includes a file format definition and a network communications protocol"): https://en.wikipedia.org/wiki/DICOM
It's basically how imaging devices communicate and store images.
Image examples: http://www.osirix-viewer.com/resources/dicom-image-library/
A video of what a doctor would see: https://www.youtube.com/watch?v=Prb5lcR8Jqw
TCP-based protocol in Wireshark: https://wiki.wireshark.org/Protocols/dicom
I wrote a little .dcm to .jpg converter based on ruby-dicom BTW: https://gist.github.com/Dorian/9e3eb5891b49926c15a05c641ffef...
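(The core of such a conversion is only a few lines. Here's a rough Python equivalent using pydicom + Pillow, assuming a single-frame grayscale image; this is a sketch, not the ruby-dicom code in the gist:)

    import numpy as np
    import pydicom
    from PIL import Image

    ds = pydicom.dcmread("scan.dcm")           # parse the DICOM file
    arr = ds.pixel_array.astype(np.float32)    # raw pixel data (often 12/16-bit)

    # Scale down to 8 bits so it can be written as a JPEG.
    arr = (arr - arr.min()) / max(arr.max() - arr.min(), 1.0) * 255.0
    Image.fromarray(arr.astype(np.uint8)).save("scan.jpg")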
- PACS seems just like a database model basically http://mehmetsen80.github.io/EasyPACS/
It's the server that is gonna give the info to the doctors.
It's interesting how there seems to be only one popular viewer: OsiriX
The main issue with HL7 is not technical. From a business point of view, cooperating with other systems via HL7 just gives a department one more reason to adopt a system other than yours.
These are examples of next-generation radiology companies. The current generation of products is focused on image storage and display. These new companies offer automated image analysis before the radiologist even looks at the image. iSchemaView does hemorrhage maps as soon as a new head CT or head MRI is acquired.
It looks like everybody sitting on their data is hindering progress. Is there anything that can be done about that politically? I mean, in many cases the data belongs to the public anyway, unless people signed a waiver, but what is the legality of that?
I'm sure machines will someday take over radiology but there will be many, many jobs automated before it (i.e. decades).
There are three areas that take a lot of time that radiologists would like to see automated:
1. Counting lung nodules.
2. Working mammography CAD.
3. Automated bone-age determination.
Those are the hot three topics for machine learning. Personally, I think that a normal vs. non-normal classifier for CXRs would be more interesting because you could have a completely generated note for normal reads, and radiologists could just quickly look at the image without writing/dictating anything. Of note, hospitals and radiology departments typically lose money on X-ray reads because the reimbursement is $7-$20 (compared to $100+ for MR/CT). So if you could halve the read time, they might become profitable again.
Edit: In terms of 10x, what you'd want is a system that would automatically make the reads (i.e. radiologist report), and a very efficient way for radiologist to verify what is written. It's hard to make a pathologic read, but since roughly 50% of reads are normal, you could start with normal reports.
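To make the normal vs. non-normal idea concrete, here's a minimal fine-tuning sketch (PyTorch; the backbone choice, the two-class head, and the data layout are my assumptions, and the hard part is the labelled data, not this code):

    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from an ImageNet-pretrained backbone and swap in a
    # two-class head: 0 = normal, 1 = non-normal.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
        # images: (N, 3, 224, 224) preprocessed CXRs; labels: (N,) int64
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()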
And then bringing checklist-driven analysis to radiologists.
So I decided to observe a radiologist at a hospital for a day (back in 2011), and I noticed that most if not all of it was already automated.
Radiologists were there to rubber-stamp the machine's work (and to ensure compliance with laws and whatnot).
As an example, one survey (https://ashleynolan.co.uk/blog/frontend-tooling-survey-2016-...) put the number of developers who don't use any test tools at almost 50%. In the same survey about 80% of people stated their level of JS knowledge was Intermediate, Advanced or Expert.
We're currently working on a way to help devs test web app functionality and complete user journeys without having to actually write tests in Selenium or whatever. The idea is to let devs write down what they want to test in English ("load the page", "search for a flight", "fill the form", "pay with a credit card", etc.), then we'll use NLP to discern intent, and we have ML-trained models to actually execute the test in a browser.
You can give us arbitrary assertions, but we also have built-in tests for the page actually loading, the advertising tags you use, page performance, some security stuff (insecure content, malware links). At the end we hand you back your test results, along with video and perf stats. It’s massively faster than writing Selenium, and our tests won’t break every time an XPATH or ID changes.
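(A toy illustration of the intent step, nothing like our actual NLP/ML pipeline, just to make the idea concrete; the intent names here are invented:)

    # Toy keyword matcher standing in for real intent classification.
    INTENTS = {
        "load": "NAVIGATE",
        "search": "SEARCH",
        "fill": "FILL_FORM",
        "pay": "SUBMIT_PAYMENT",
    }

    def discern_intent(step: str) -> str:
        for keyword, intent in INTENTS.items():
            if keyword in step.lower():
                return intent
        return "UNKNOWN"

    assert discern_intent("Load the page") == "NAVIGATE"
    assert discern_intent("pay with a credit card") == "SUBMIT_PAYMENT"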
>tries to do so by using an imprecise, context-dependent language designed for person-to-person communication to instruct a machine
Selenium is its own can of worms, but it absolutely sounds like you're using the wrong tool for the job here. The problem stopping people from writing browser-based tests is not that people can't understand specific syntaxes or DSLs, it's actually the opposite: people don't have a good, reliable tool to implement browser-based testing in a predictable and specific way that does what a user would intuitively expect.
Whatever the right answers to a next-gen Selenium are, attempting to guess the user's meaning from real English using something that is itself an imperfect, developing technology like NLP is pretty obviously not the right toolkit to provide that. Remember, a huge amount of the frustration with Selenium comes from not having the utilities needed to specify your intention once and for all; the ambiguities of plain English will not help.
If your thing works, it will have to end up as a keyword based DSL like SQL. SQL is usually not so scary to newcomers because a simple statement is pretty accessible, not having any weird symbols or confusing boilerplate, but SQL has a rigid structure and it's parsed in conventional, non-ambiguous ways. "BrowserTestQL" (BTQL) would need to be similar, like "FILL FORM my_form WITH SAMPLE VALUES FROM visa_card_network;"
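A minimal sketch of how rigidly such a statement could be parsed (grammar and names invented here for illustration, in Python):

    import re

    # Hypothetical "BTQL" statement:
    #   FILL FORM <form_id> WITH SAMPLE VALUES FROM <dataset>;
    BTQL_FILL = re.compile(r"FILL FORM (\w+) WITH SAMPLE VALUES FROM (\w+);")

    def parse_fill(stmt: str) -> dict:
        m = BTQL_FILL.fullmatch(stmt.strip())
        if m is None:
            raise ValueError(f"not a valid FILL FORM statement: {stmt!r}")
        return {"action": "fill_form", "form": m.group(1), "dataset": m.group(2)}

    print(parse_fill("FILL FORM my_form WITH SAMPLE VALUES FROM visa_card_network;"))
    # {'action': 'fill_form', 'form': 'my_form', 'dataset': 'visa_card_network'}

No NLP, no ambiguity: either the statement matches the grammar or you get an error.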
The biggest piece that's missing in Selenium is probably a new, consistent element hashing selector format; each element on the page should have a machine-generated selector assigned under the covers and that selector should never change for as long as the human is likely to consider it the "same element". The human should then use those identifiers to specify the elements targeted. I don't know how to do that.
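(Here's a naive sketch of the fingerprinting idea in Python, hashing only the attributes a human uses to recognise the element; it doesn't solve the hard "same element across a redesign" problem, it just shows the shape of it:)

    import hashlib

    def element_fingerprint(tag: str, attrs: dict, text: str) -> str:
        # Keep only attributes tied to the element's identity; deliberately
        # drop brittle things like generated class names and XPath position.
        stable = sorted((k, v) for k, v in attrs.items()
                        if k in ("name", "type", "role", "aria-label"))
        payload = f"{tag}|{stable}|{text.strip().lower()}"
        return hashlib.sha256(payload.encode()).hexdigest()[:12]

    # Same fingerprint even though the generated CSS class changed:
    a = element_fingerprint("button", {"type": "submit", "class": "btn-9f3"}, "Pay now")
    b = element_fingerprint("button", {"type": "submit", "class": "btn-a71"}, "Pay now")
    assert a == b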
The second biggest piece that's missing from Selenium is a consistent, stable WebDriver platform that almost never errors out mid-script; this may involve some type of compile-time checking against the page's structure or something (which I know is hard, possibly impossible, because of JS and everything else).
And whether or not it gets that data is a unit test in another place.
Testability isn't the domain of the view layer.
Abstracting the DOM into a declarative DOM is great for performance, but doesn't necessarily lead to more testable code.
- Test recorders that aren't a great experience and output incomprehensible, brittle tests.
- Test composers that I can best describe as 90's SQL query builders for Selenium.
Complex JS apps are still a challenge for us (especially with some of the WTF code we come across in the wild), but we have a strategy in the works for them. We're still pre-release though. If you're interested, send me an email (firstname.lastname@example.org) and I'll add you to our alpha list.
That is very often the case. It needs to change. Testing is a part of software development, and anyone who writes software should be aware of it. I feel the same way about documentation. And requirements. You can't write good software without knowledge of the processes that surround development. It isn't enough just to be able to write great code.
If you spend more time writing/running tests than you would fixing the bugs they find, you may be doing it wrong. If you're writing documentation no one will read, you may be doing it wrong.
They clearly do have a place though. As for maintaining a set of requirements... I appreciate there must be some environments where what is required is well understood and relatively stable. I'm not quite sure if I should look forward to working in such a place or not!
Actually there isn't. Every project, no matter how it's managed, changes as it goes on. It has to, because you learn and discover things along the way. That's why maintaining and understanding project requirements and how they've changed is incredibly important. If you don't keep on top of them then you end up with a project that wanders all over the place and never finishes. Or you build something that misses out on important features. Or the project costs far too much. Requirements are not tasks, or epics, or things you're working on right now. They're the goals that the tasks and epics work towards.
(My first startup was a requirements management app.)
How did that work out? In the 90's it seemed every industry was switching to Documentum for that sort of thing.
Why should those 2 activities be compared? They do not compare: writing/running tests is about discovering the bug, not fixing it. You still need to fix it after you have done your testing activity.
The time spent writing/running tests should better be compared to the time spent in bug discovery without tests, i.e. how much you value the fact that your users are going to undergo bugs, what the consequences of the users hitting bugs are, what the process to report them is, etc.
I don't think that's the definition of a junior developer. Test tools are a part of building software; you should be hiring devs that have created projects that use tests of some sort, if not with the technology you're using.
>I expect that junior developer in software field should be able to program only.
I don't know how you can have little to no dev experience and know how to program.
A developer needs to know the development cycle, automated testing, continuous integration, the software life cycle, ticketing systems, source control systems, branching and merging, cooperating, etc.
A programmer needs to know programming languages, patterns, algorithms, computer internals, effectiveness, profiling, debugging, etc.
A junior developer (in the software field) has little or no experience in development, so a junior developer is almost equal to a programmer, which causes a lot of confusion.
If by 'testing' you really mean 'unit testing', as I suspect most junior engineers who claim testing experience do, then hope is already lost. The one saving grace is that there is enough churn in webdev that nothing lasts long enough to reveal how fragile it is.
That said, I'll be thrilled if React Native gives rise to higher quality apps in situations where a native app is unavoidable (e.g. my bank's app).
Five years ago native apps made a level of UX possible that was unheard of on the web, to say nothing of mobile. But today not only has HTML/js closed the gap, but whiz-bang native animations aren't impressive just on account of being novel anymore.
I think these are the future. Once they catch on with mainstream consumers, native apps won't stand a chance against the convenience of simply visiting a website to install/use. Plus, on the developer end, we finally have a true "write once, run anywhere" situation that doesn't involve any complex toolchains or hacky wrappers.
But do you think that Apple would embrace this technology, given that e.g. their app-store is generating lots of revenue?
99% of web apps need the same features but most of this is still up to manually rolling your own.
I should be able to clone some repo, enter some DO/AWS/GOOG keys and push.
This makes the complexity problem much easier to solve, as the code is (should be) less likely to cause an unanticipated mutated state which can't be easily tested for.
I suspect the reality is a small subset of programmers think functional programming is amazing and everyone else hates it. You might think it reduces complexity, but a lot of people feel it reduces comprehensibility.
Yep. It's not that I hate it, I just don't like it. The thing is that the functional-praisers are much more vocal about how they love it whereas people who write imperative do not care much about Haskell.
We are happy with LINQ and that's all 99% of us want/need.
Because nothing outside of the function can be changed, and dependencies are always provided as function arguments, the resulting code is extremely predictable and easy to test, and in some cases your program can be mathematically proven correct (albeit with a lot of extra work). Dependency injection, mocks, etc are trivial to implement since they are passed directly to the function, instead of requiring long and convoluted "helper" classes to change the environment to test a function with many side effects and global dependencies. This can lead to functions with an excessively long list of parameters, but it's still a net win in my opinion (this can also be mitigated by Currying).
A side-effect (hah) of this ruleset is that your code will tend to have many small, simple, and easy to test methods with a single responsibility; contrast this with long and monolithic methods with many responsibilities, lots of unpredictable side effects that change the behavior of the function depending on the state of the program in its entirety, and which span dozens or hundreds of lines. Which would you rather debug and write tests for? Tangentially, this is why I hate Wordpress; the entire codebase is structured around the use of side-effects and global variables that are impossible to predict ahead of runtime.
There is much, much more to functional programming (see Monads and Combinators), but if you don't take away anything else, at least try to enforce the no-side-effects rule. A function without side-effects is deterministic; i.e. it will always give you the same output for any given set of inputs (idempotence comes for free). Because everything is a function, functions are first-class citizens, and there are only a few simple data structures, it becomes easy to chain transformations and combine them by applying some of the arguments ahead of time. Generally you will end up with many generalized functions which can be composed to do anything you require without writing a new function for a specific task, thus keeping your codebase small and efficient. It's possible to write ugly functional code, and it's possible to write beautiful and efficient object-oriented code, but the stricter rules of functional style theoretically make the codebase less likely to devolve into incomprehensible spaghetti.
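A tiny sketch of the no-side-effects and currying points in Python (with functools.partial standing in for currying; the tax example is invented):

    from functools import partial

    # Pure: the output depends only on the arguments. The dependency
    # (a tax-rate lookup) is passed in instead of read from a global.
    def price_with_tax(tax_rate_for, amount: float, region: str) -> float:
        return amount * (1 + tax_rate_for(region))

    # Testing needs no mock framework or helper classes, just a stub:
    fake_rates = lambda region: {"CO": 0.25}.get(region, 0.0)
    assert price_with_tax(fake_rates, 100.0, "CO") == 125.0

    # "Currying" away the dependency to shorten the parameter list:
    price = partial(price_with_tax, fake_rates)
    assert price(100.0, "CO") == 125.0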
Huge and seemingly often unacknowledged issue these days. And many attempted solutions seem to be adding fuel to the fire (or salt to the wound) by creating more tools (to fix problems with previous tools) ...
Red (red-lang.org) is one different sort of attempt at tackling modern software development complexity. It's like an improved version of REBOL, but aims to be a lot more: a single language (two actually, RED/System and RED itself) for writing (close to) the full stack of development work, from low level to high level. Early days though, and they might not have much for web dev yet (though REBOL had/has plenty of Internet protocol and handling support).
You're asking the wrong question. It shouldn't be "how do we get people to slow down?" It should be, "how do we make rapid software development better?"
Not too long ago (in human-years, not internet-years), most node packages weren't built with unit testing. Now it's quite common in the popular packages.
Website UI is probably the same thing. After all, it took us a really long time until the whole HTML5 spec finally stabilised.
So you will probably see the tipping point occur over the next 10 human years, or less.
And just like you, I've been really frustrated with the inadequacy of UI testing tools, especially with Selenium. So like @donaltroddyn, I set out to develop my own UI testing tool (https://uilicious.com/), to simplify the test flow and experience.
So wait around, and you will see new tools, and watch them learn from one another. And if you want to dive right into it, we are currently running a closed beta.
This also brings me to Traefik, one of the coolest projects I have come across in the last months.
Traefik + DC/OS + CI/CD is what allows developers to create value for the business in hours and not in days or weeks.
Also, we deploy to production at least 4 times a day, the time from commit to deployable to production is about 30 minutes. And because it is a container it will start with a clean, documented setup (Dockerfile) every time. There is no possibility of manual additions, fixes or handholding.
AMI only runs on AWS. Docker runs on anything. I don't think "versatile" is the word you are looking for.
We mainly use DC/OS to run more services on fewer instances.
From an "I just want to get my app deployed" perspective it may still be best to just use Heroku. But from a "new developments in the field" perspective, the fact that I can rent a few machines and have my own Heroku microcosm for small declining effort is pretty cool.
Transfer Learning (so we need less data to build models) http://ftp.cs.wisc.edu/machine-learning/shavlik-group/torrey...
Generative adversarial networks (so computers can get human like abilities at generating content) https://papers.nips.cc/paper/5423-generative-adversarial-net...
Furthermore, if we consider that most of these DL papers completely ignore the fact that the nets must run for days on a GPU to get decent results, then everything appears way less impressive. But that's just my opinion.
I love working in deep learning, but we still have LOTS of work to do.
The time from theoretical paper to widely deployed app is smaller in DL than in any other field I have experience with.
It's going to take less and less time and money to train a useful model.
Does this only apply to artistic content, or also to engineering content? Say PCB layouts, architectural plans, mechanical designs, etc.?
To get a better understanding (other than reading a paper), read this excellent blog post: it is a lot harder to build a NN when there are very constrained rules. But it is also a lot easier to verify and penalize it and to generate synthetic data.
The most developed implementation is BayesDB, but there are a lot of ideas coming out of a number of places right now.
e.g. store customer orders in the DB, and query `P(c buys x | c bought y)` in order to make recommendations, where `c buys x` is unknown, but `c bought y` occurred, and we know the x and y outcomes for other customers.
Is that sort of how it works?
The way I see it, the real utility comes from the fact that domain models such as those in a company's data warehouse are typically very complex, and a great deal of care often goes into mapping out that complexity via relational modelling. It's not just that c bought x and y, but also that c has an age, and a gender, and last bought z 50 days ago, and lives in Denver, and so on.
Having easy access to the probability distributions associated with those relational models gives you a lot of leverage for solving real-life problems.
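A literal counting version of `P(c buys x | c bought y)` over a toy orders table, just to pin down the semantics (pandas here; BayesDB would answer this from a learned probabilistic model rather than raw counts):

    import pandas as pd

    # Toy order history: one row per (customer, product) purchase.
    orders = pd.DataFrame({
        "customer": ["a", "a", "b", "b", "c", "d"],
        "product":  ["x", "y", "y", "x", "y", "x"],
    })

    bought = orders.groupby("customer")["product"].apply(set)
    y_buyers = bought[bought.apply(lambda s: "y" in s)]

    # P(buys x | bought y): fraction of y-buyers who also bought x.
    print(y_buyers.apply(lambda s: "x" in s).mean())  # 2 of 3 -> 0.666...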
http://empirical.com (still dark atm)
co-founded by CEO Richard Tibbetts, who was also a co-founder of StreamBase (acquired by TIBCO).
I'm curious to know if it's related to the current discussion.
- Meta-tracing, e.g. PyPy.
- End-to-end verification of compilers, e.g. CompCert and CakeML.
- Mainstreamisation of the ideas of ML-like languages, e.g. Scala, Rust, Haskell, and the effect these ideas have on legacy languages, e.g. C++, Java 9, C#.
- Beginning of use of resource types outside pure research, e.g. affine types in Rust and experimental use of session types.
Foundation of mathematics:
- Homotopy type theory.
- Increasing mainstreamisation of interactive theorem provers, e.g. Isabelle/HOL, Coq, Agda.
- Increasing ability to have program logics for most programming language constructs.
- Increasingly usable automatic theorem provers (SAT and SMT solvers) that just about everything in automated program verification 'compiles' down to.
I don't know much about CPUs, but I suspect that one of the core problems of software verification, the absence of a useful specification, isn't much of an issue with hardware.
I'd be really interested in applying any of these techniques to a full TLS implementation.
 K. Bhargavan et al, Implementing TLS with Verified Cryptographic Security. http://research.microsoft.com/en-us/um/people/fournet/papers...
Can you talk more about this? I even got THE book on this (haven't really read it yet though), and I think I get the rough ideas, but I'd be curious to hear what HoTT means to you (lol).
In HoTT, there is an extension of inductive types that allows you not just to have constructors, but also to impose "equalities", so these generalized "quotients" really have first-class status in the language.
As far as "exciting developments" in HoTT, the big one right now is Cubical Type Theory , which is the first implementation of the new ideas of HoTT where Higher inductive types and the univalence axiom "compute" which means that the proof assistant can do more work for you when you use those features.
I just saw a talk about it and from talking to people about it, this means that it won't be too long (< 5 years I predict) before we have this stuff implemented in Agda and/or Coq.
Finally, I just want to say to people that are scared off or annoyed by all of the abstract talk about "homotopies" and "cubes", you have to understand that this is very new research and we don't yet know the best ways to use and explain these concepts. I certainly think that people will be able to use this stuff without having to learn anything about homotopy theory, though the intuition will probably help.
HoTT brought dependent types and interactive theorem proving to the masses. Before HoTT, the number of researchers working seriously on dependent type theory was probably < 20. This has now changed, and the field is developing at a much more rapid pace than before.
How much do you know about modern testing, abstract interpretation, SAT/SMT solving? In any case, as of Feb 2017, a lot of this technology is not yet economical for non-safety critical mainstream programming. Peter O'Hearn's talk at the Turing Institute https://www.youtube.com/watch?v=lcVx3g3SmmY might be of interest.
Why isn't it economical yet?
There are some ways in which these tools are not economical. There is currently a big gap. On one side of the gap, you have SMT solvers, which have encoded in them decades of institutional knowledge about generating solutions to formulas. An SMT solver is filled with tons of "strategies" and "tactics", which are also known as "heuristics" and "hacks" to everyone else. It applies those heuristics, and a few core algorithms, to formulas to automatically come up with a solution. This means that the behavior is somewhat unpredictable: sometimes you can have a formula with 60,000 free variables solved in under half a second, sometimes you can have a formula with 10 that takes an hour.
It sucks when that's in your type system, because then your compilation speeds become variable. Additionally, it's difficult to debug why compiling something is slow (and by slow, I mean sometimes it will "time out" because otherwise it would run forever), because you have to trace through your programming language's variables into the solver's variables. If a solver can say "no, this definitely isn't safe", most tools are smart enough to pull the reasoning for "definitively not safe" out into a path through the program that the programmer can study.
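(If you've never poked at one, here's the smallest possible taste of an SMT solver via the z3 Python bindings; the constraints are invented for illustration:)

    from z3 import Ints, Solver, sat

    x, y = Ints("x y")
    s = Solver()
    s.add(x > 2, y < 10, x + 2 * y == 7)  # the "formula"

    if s.check() == sat:   # the heuristics and core algorithms run here
        print(s.model())   # one satisfying assignment, e.g. [y = 0, x = 7]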
On the other end of the spectrum are tools like coq and why3. They do very little automatically and require you, the programmer, to specify in painstaking detail why your program is okay. For an example of what I mean by "painstaking" the theorem prover coq could say to you "okay, I know that x = y, and that x and y are natural numbers, but what I don't know is if y = x." You have to tell coq what axiom, from already established axioms, will show that x = y implies y = x.
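For flavour, that symmetry obligation written out in Lean rather than coq (same idea: the prover accepts it only once you invoke an established lemma):

    -- Given h : x = y, produce y = x by invoking the symmetry lemma.
    theorem my_symm (x y : Nat) (h : x = y) : y = x := h.symm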
Surely there's room for some compromise, right? Well, this is an active area of research. I am working on projects that try to strike a balance between these two design points, as are many others, but unlike the GP I don't think there's anything to be that excited about yet.
There's a lot of problems with existing work and applying it to the real world. Tools that reason about C programs in coq have a very mature set of libraries/theorems to reason about memory and integer arithmetic but the libraries they use to turn C code into coq data structures can't deal with C code constructs like "switch." Tools that do verification at the SMT level are frequently totally new languages, with limited/no interoperability with existing libraries, and selling those in the real world is hard.
It's unlikely that any of this will change in the near term because the community of people that care enough about their software reliability is very small and modestly funded. Additionally, making usable developer tools is an orthogonal skill from doing original research, and as a researcher, I both am not rewarded for making usable developer tools, and think that making usable developer tools is much, much harder than doing original research.
It sadly also depends a lot on the solver used and the way the problem was encoded in SMT. For a class in college I once tried to solve Fillomino puzzles using SMT. I programmed two solutions: one used a SAT encoding of Warshall's algorithm and the other constructed spanning trees. On some puzzles the first solver required multiple hours whereas the second only needed a few seconds, while on other puzzles it was the complete opposite. My second encoding needed hours for a puzzle which I could solve by hand in literally a few seconds. SAT and SMT solvers are extremely cool, but incredibly unpredictable.
It's frustrating because this stuff really works. Making it work probably doesn't have to be hard, but researchers that know both about verification and usability basically don't exist. I blame the CS community's disdain for HCI as a field.
I had heard about Dafny but hadn't seen the tutorial!
> Additionally, making usable developer tools is an orthogonal skill from doing original research, and as a researcher, I both am not rewarded for making usable developer tools, and think that making usable developer tools is much, much harder than doing original research.
When you say they're orthogonal, are you effectively saying that researchers generally don't have 'strong programming skills' (as far as actually whacking out code)? If so, how feasible would it be for someone who is not a researcher, but a good general software engineer, to work on the developer tools side of things?
I think that this keeps most researchers away from making usable tools. It's hard, they're not rewarded for making software artifacts, they're maybe not as good at it as they are at other things.
I think it's feasible for anyone to work on the developer tools side of things, but I think it's going to be really hard for whoever does it, whatever their background is. There are lots of developer tool development projects that encounter limited traction in the real world because the developers do what make sense for them, and it turns out only 20 other people in the world think like them. The more successful projects I've heard about have a tight coupling between language/tool developers, and active language users. The tool developers come up with ideas, bounce them off the active developers, who then try to use the tools, and give feedback.
This puts the prospective "verification tools developer" in a tight spot, because there are only a few places in the world where that is happening nowadays: Airbus/INRIA, Rockwell Collins, Microsoft Research, NICTA/GD. So if you can get a job in the tools group at one of those places, it seems very feasible! Otherwise, you need to find some community or company that is trying to use verification tools to do something real, and work with them to make their tools better.
Compilers, in particular optimising compilers, are notoriously buggy; see John Regehr's blog. An old dream was to verify them. The great Robin Milner, who pioneered formal verification (like so much else), said in his 1972 paper "Proving compiler correctness in a mechanized logic", about all the proofs they left out: "More than half of the complete proof has been machine checked, and we anticipate no difficulty with the remainder". It took a while before X. Leroy filled in the gaps. I thought it would take a lot longer before we would get something as comprehensive as CakeML; indeed, I had predicted this would only appear around 2025.
> It sucks when that's in your type system
> making usable developer tools is much, much harder than doing original research.
In the best case, deciding formulae like A -> B is NP-complete, but typically much worse. Moreover, program verification of non-trivial programs seems to trigger the hard cases of those NP-complete (or worse) problems naturally. Add to that the large size of the involved formulae (due to large programs), and you have a major theoretical problem at hand: e.g. solve SAT in n^4, or find a really fast approximation algorithm. That's unlikely to happen any time soon.
We don't even know how effectively to parallelise SAT, or how to make SAT fast on GPUs. Which is embarrassing, given how much of deep learning's recent successes boil down to gigantic parallel computation at Google scale. Showing that SAT is intrinsically not parallelisable, or even just not GPUable (should either be true), looks like a difficult theoretical problem.
> as a researcher, I both am not rewarded for
> the community of people that care enough about their software reliability is very small and modestly funded.
You can think of program correctness like the speed of light: you can get arbitrarily close, but the closer you get, the more energy (cost) you have to expend. Type-checking and a good test suite already catch most of the low-hanging fruit that the likes of Facebook and Twitter need to worry about. As of 2017, for all but the most safety-critical programs, the cost of dealing with the remaining problems is disproportionate in comparison with the benefits. Instagram or Whatsapp or Gmail are good enough already despite not being formally verified.
Cost/benefit will change only if the cost of formal verification is brought down, or the legal frameworks (liability laws) are changed so that software producers have to pay for faulty software (even when it's not an Airbus A350 autopilot).
I know that some verification companies are lobbying governments for such legislative changes. Whether that's a good thing, regulatory capture, or something in-between is an interesting question.
Another dimension of the problem is developer education. Most (>99%) of contemporary programmers lack the necessary background in logic even to think properly about program correctness. Just ask the average programmer about loop invariants and termination orders: they won't be able to do this even for 3-line programs like GCD. This is not a surprise, as there is no industry demand for this kind of knowledge, and it will probably change with a change in demand.
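To be concrete about what's being asked for, here is GCD in Python with its invariant and termination measure spelled out as comments (assuming nonnegative inputs):

    def gcd(a: int, b: int) -> int:
        # Loop invariant: gcd(a, b) equals the gcd of the original inputs.
        # Termination measure: b is a nonnegative integer that strictly
        # decreases each iteration, since a % b < b whenever b > 0.
        while b != 0:
            a, b = b, a % b  # invariant preserved: gcd(a, b) == gcd(b, a % b)
        return a  # b == 0 and gcd(a, 0) == a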
I do think that making verification tools easier is something that researchers could and should be thinking about. Probably not verification and logic researchers directly, but someone should be carefully thinking about it and systematically exploring how we can look at our programs and decide they do what we want them to do. I have some hope for the DeepSpec project to at least start us down that path.
I also have hope for type-level approaches where the typechecking algorithms are predictable enough to avoid the "Z3 in my type system" problem but expressive enough that you can get useful properties out of them. I think this is also a careful design space and another place that researchers lose because they don't think about usability. They just say "oh, well I'll generate these complicated SMT statements and discharge them with Z3, all it needs to work on are the programs for my paper and if it doesn't work on one of them, I'll find one it does work on and swap it out for that one." Why would you make a less expressive system if usability wasn't one of your goals?
Don't know much about it, but verum.com claims 50% reduction in development costs.
There's been a renaissance of study in placebo effects, meditation, and general frameworks for how people change belief for therapeutic purposes or otherwise, but to me, that's been going on for a long time and is more about acceptance than being a new development.
One of the most exciting developments that's been coming out recently is playing with language to do what's called context-free conversational change.
Essentially, you can help someone solve an issue without actually knowing the details or even generally what they need help with. It's like homomorphic encryption for therapy. A therapist can do work, a client can report results, but the problem itself can be a black box along with a bit of the solution as well since much of the change is unconscious.
It works better with feedback (a conversation) of course, but often can be utilized in a more canned manner if you know the type of problem well enough.
I'm working on putting together an automated solution based on some loose grammar rules, NLP, Markov chains, and anything else I can use to help a machine be creative with language so it can help people solve their own problems. As a first step, it's meant to be a useful tool for beginner therapists, to help them get used to the ideas and the frameworks of language to use.
So essentially, I'm getting a good chunk of the way toward hacking on a machine that can reliably work on people's problems without having to train a full AI or anything remotely resembling real intelligence, just mimicking it.
Before you go thinking, "Didn't they do that with Eliza?" Well yes, in a way, but my implementation is using an entirely different approach.
With all due respect, said politely, it is my opinion that you are a charlatan.
I wasn't interested in long citations or garnering proof of my work in particular with training a machine to do this work. I simply wished to add to this thread and did so, in order to show someone out there, maybe even you, what else is going on that is exciting in my little corner of the world.
I'm not that good of a programmer, so it's not in a state that it does work yet. I hope my original comment didn't suggest otherwise, but let me be perfectly clear here: I have no working machine implementation that can do what I want yet. It can work with simple canned responses like Eliza, but it's not enough. I am working on employing all of the techniques and tools mentioned, but progress is slow.
However, this is work and change I employ daily with my clients professionally and I can assure you that it does work.
You don't even have to take my word for it.
Consider....seriously consider: who would you not be if you weren't you?
If you thought about that one for a sec and felt a little spaced out for a second, you did very well.
If you came up with something quickly like "me" and didn't really actually consider the question, allow me to pose another to you. Again, seriously consider this. Read it a few times. Imagine emphasis on different words each time.
Who are you not without that problem you are interested in solving?
This work can be made more difficult by text only and seriously asynchronous communication, which is why I mentioned it being easier within conversation.
If you are interested in more, google "mind bending language" or "attention shifting coaching" and find Igor Ledochowski and John Overdurf. Their work has helped me change the lives of thousands.
> You don't even have to take my word for it.
Honest question: how not?
> who would you not be if you weren't you?
Depending on how you parse the sentence, either "someone else" or "that's just a paradox". Essentially the concept of "me" as an entity is fundamentally flawed.
Playing with the meanings of "me" and "not me" in a subjunctive form doesn't make the question very interesting (as in non-trite), to be honest. I guess the intent is not to be fresh but to be thought-provoking or similar, or setting the listener in a certain mindset? Still, sets my mind in the "meh" state.
> Who are you not without that problem you are interested in solving?
I'm not my problems. I'm also not not-my-problems. Actually I am not (I isn't?). I don't see how this helps with anything, though.
Either way, your questions pose (to me) more philosophical thinking (which I already do, anyways) than mindbending or whatever. Maybe my mind is already bent... and I have to say it didn't go very well ;)
A long time ago I came to the conclusion that these questions are merely shortcomings in how language and cognition works. Metaphysics, ontology (and even epistemology) are just fun puzzles with no solution, which I'm ultimately obliged to answer with "who the f--- cares".
Kant was right.
Not that anything you said is directly contradicted by Kant. In fact I'd say it fits very well within the idea that "human mind creates the structure of human experience". It's just never been really useful to me in any way. I really, really, want to know more of (and even believe in) your changework but, often being presented with vague ideas, no one has ever made a solid case on how it isn't, as GP said, charlatanry.
This is true of basically every post in this thread. If you're interested in learning more, there are more friendly ways of asking.
Re: friendliness -- I believe I expressed the opinion that someone is a charlatan in as friendly of a manner as is possible.
Igor Ledochowski - http://hypnosistrainingacademy.com
John Overdurf - http://JohnOverdurf.com
As far as context-free therapy goes, that's a bit of an advanced subject, but can be learned and mastered through some of their programs.
The key tenets are simple though. As a model, consider that human language builds around 5 concepts: Space, Time, Energy, Matter, and Identity. These 5 also map cleanly to questions (5Ws and H) and language predicates in human language. Space is Where, Time is When, Energy is How, Matter is What, and Identity carries two with Why and Who.
Every problem you've ever had is built up of some combination of the 5 in a specific way, unique to you.
The pattern of all change is this:
1) Associate to a problem, or in other words, bring it to mind.
2) Dissociate from the problem, or basically get enough distance from it so that you can think rationally and calmly. Similar to a monkey not reaching for a banana when a tiger is running after it, your brain does not do change under danger and stress well. It can, but that usually leads to problems in the first place.
3) Associate (think about, experience) a resource state. Another thought or experience that will help with this one, for example if someone were afraid of clowns, I'd ask a question like, "What clowns fear you?" It usually knocks them out of the fear loop for a second.
4) While thinking about the resource, recall the problem and see how it has changed. Notice I said has changed. It always changes. You can never do your problem the same again. Will this solve things on the first go? Maybe. Maybe not, but it's enough to get a foothold and a new direction and loop until it's done.
Which is what makes this fun and exciting to do in person, and fun and exciting to help teach a machine to mimic it too.
That's why I made my original comment. Maybe you're not a charlatan, in which case I'd have to conclude you're thinking irrationally and have been deceived by some form of magical thinking.
You have not proposed any mechanism by which these steps can form a consistent treatment for problems that individuals have struggled with for years. You've merely declared that it will, and a whole lot of faith is required.
Other posts in this thread mostly propose a mechanism, even if we readers don't have the prerequisites to fully understand it. For example, consider the proposal that machine learning could be applied to the mundane tasks a radiologist performs. It may or may not pan out, but it has a basis.
Basically, what we do is based on how we see things. If we can change how we see things, then new actions & results become available.
Then the question just becomes, how can we change how we see things.
If how we see something comes from what we've experienced, then introducing a new experience can have us see it differently.
If how we see something comes from what we think about it, then we can introduce a new thought about it.
The point being to change the internal mental model related to the thing, so that we see it differently, we experience it differently, it occurs for us differently than it did before.
In the case above, step 3 introduced a new thought and internal experience related to the thing, and thus the step between 3 and 4 is, "their internal mental model, connected to the thing, changed".
Again, the mechanism (and the missing step) becomes, "change how we see & experience something, change our internal model relating to it". And then, some possibilities for triggering that include having a new thought about it, having a new experience about it; and various techniques can exist for introducing those experiences or thoughts.
At least, that's how I see it (how it occurs for me, how I've experienced it).
Lambdas are lightweight function calls that can be spawned on demand in sub-millisecond time and don't need a server that's constantly running. They can replace most server code in many settings, e.g. when building REST APIs that are backed by cloud services such as Amazon DynamoDB.
I've heard many impressive things about this way of designing your architecture, and it seems to be able to dramatically reduce cost in some cases, sometimes by more than 10 times.
The drawback is that currently there is a lot of vendor lock-in, as Amazon is (to my knowledge) the only cloud service that offers lambda functions with a really tight and well-working integration with their other services (this is important because on their own lambdas are not very useful).
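For reference, the programming model really is just a function. A minimal Python handler, assuming the API Gateway proxy integration (the event shape and function name are whatever you configure):

    import json

    # With the proxy integration, the HTTP request arrives as `event`
    # and the returned dict becomes the HTTP response.
    def lambda_handler(event, context):
        params = event.get("queryStringParameters") or {}
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"hello": params.get("name", "world")}),
        }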
Your input is tightly restricted, and with Amazon in particular, easy to break before you even get to the Lambda code (the Gateway is fragile in silly ways). Your execution schedule is tightly controlled by factors outside your control - such as the "one Lambda execution per Kinesis shard". You can be throttled arbitrarily, and when it just fails to run, you are limited to "contact tech support".
In short, I can't trust that Lambda and its ilk are really designed for my use cases, and so I can only really trust it with things that don't matter.
Auth0 has Web Tasks: https://webtask.io/
I'm sure there are many more implementations out there. Agree that vendor lock-in is always a concern.
But the reality is that they don't, with cold-start times upward of 30 seconds. If you use them enough to avoid the cold-start penalties, then you're better off with reserved instances because lambdas are 10x the price. If you can't handle the 30 second penalty then you're better off with reserved instances because they're always on. If you have rare and highly latency-tolerant events, then use lambda.
There is no cutting edge with serverless on AWS.
And within the wider space of blockchains, improving access to strong anonymization techniques appears to be moving forward quickly: https://blog.ethereum.org/2017/01/19/update-integrating-zcas...
The original expectation was to gradually increase the block size to increase capacity as more users joined the network, eventually transitioning most users to "thin" clients that don't store the (eventually enormous) complete blockchain.
The Core devs right now feel that the current situation (every node a full peer with the complete chain, but maxed out capacity and limited throughput) is preferable for a number of reasons including decentralization, while the Unlimited devs feel that it's time to increase the block size in order to increase capacity and get more users on the network, among other things.
Decisions like this are usually decided by the miner network reaching consensus, with votes counted through hashing power/mined blocks. I'm not sure where things stand at the moment, but it's been interesting to observe.
I understand it's become a rather contentious topic in the community.
IIUC, segwit makes certain kinds of complicated transactions easier to handle (ones with lots of inputs/outputs), possibly allowing more transactions to fit in less space, and lays useful groundwork for overlay networks like Lightning. I think the thinking is that overlay networks can be fast, and eventually reconcile against the slower bitcoin network.
Unlimited would rather just scale up the bitcoin network in place, instead of relying on an overlay network.
You'd probably get better information from bitcoincore.org and bitcoinunlimited.info, or the subreddits /r/bitcoin and /r/btc (for core and unlimited, respectively, they split after moderator shenanigans in /r/bitcoin).
With the brutalist movement, something new started. People went back to code editors to create websites by hand, skipping the third-party, non-web-native user interface design tools that come prefilled with common knowledge and make websites look uniform.
The idea of design silos and brand-specific design thinking is dropped: no more bootstrap, flat design, material design, etc.
It's like going back to the nineties and reinventing web design. You start from scratch, on your own, and build bottom-up without external influence or help.
It's about creativity vs. the bandwagon, about crafting your own instead of putting together from popular pieces.
All a great boon for UX, while being easier to design.
For a brutalist example, let's take HN. It works. Plus, I'm sure the code is quite simple.
The emphasis is on dense content with simple links, and there's not a lot of "live" interactive content on the page. I don't find either site to be particularly ugly or visually offensive, contrary to many of the linked "brutalist" sites.
I'd love to see more sites in the HN/reddit model (here's hoping reddit's coming desktop redesign doesn't lose that), but I wouldn't want to actually use more brutalist sites (outside of individual creative expression, anyway).