Ask HN: What is the most exciting development in your field right now? - yellow_viper
======
aabajian
I'm entering radiology residency, and I'm very pro-automation / machine
learning. There's a contentious debate in the field about whether radiologists
will be replaced: [https://forums.studentdoctor.net/threads/will-ai-replace-
rad...](https://forums.studentdoctor.net/threads/will-ai-replace-
radiologists.1210227/)

HackerNews is very developer-focused. If you guys saw what a radiologist does
on a 9-5 basis you'd be _amazed_ it hasn't already been automated. Sitting
behind a computer, looking at images and writing a note takes up 90% of a
radiologist's time. There are innumerable tools to help radiologists read more
images in less time: Dictation software, pre-filled templates, IDE-like
editors with hotkeys for navigating reports, etc. There are even programs that
automate the order in which images are presented so a radiologist can read
high-complexity cases early, and burn through low-complexity ones later on.

What's even more striking is that the field of radiology is _standardized_,
in stark contrast to the EMR world. All images are stored on PACS which
communicate using DICOM and HL7. The challenges to full automation are gaining
access to data, training effective models, and, most importantly, driving user
adoption. If case volumes continue to rise, radiologists will be more than
happy to automate additional steps of their workflow.

Edit: A lot of pushback from radiologists is in regard to the feasibility of
automated reads, as these have been preached for years with few coming to
fruition. I like to point out that the deep learning renaissance in computer
vision started in 2012 with AlexNet; this stuff is _very_ new, more effective,
and quite different from previous models.

~~~
neves
20 years ago I did some software to analyze satellite images of the Amazon to
monitor deforestation. We got a result that matched the quality of human
experts. The problem has always been political and economic, not
technological.

~~~
spuz
I'm not sure if there was more to your story that you left out. Was your
software successful? Is it in use today? Is automation widespread in the study
of Amazonian deforestation?

~~~
tomsmeding
Software not being used all over the place does not mean the software is bad,
generally.

------
onion2k
My field is web development, and, to be honest, the most exciting thing going
on is that more people are starting to complain about the complexity of
development. Hopefully this will lead to people slowing down and learning how
to write better web software.

As an example, one survey ([https://ashleynolan.co.uk/blog/frontend-tooling-
survey-2016-...](https://ashleynolan.co.uk/blog/frontend-tooling-
survey-2016-results#js-testing)) put the number of developers who _don't use
any test tools_ at almost 50%. In the same survey about 80% of people stated
their level of JS knowledge was Intermediate, Advanced or Expert.

~~~
greggman
There's a market, IMO, for a full solution that includes the front end and the
entire backend: deployment, seamless scaling, seamless upgrading, seamless
backups, seamless local dev, seamless staging, etc.

99% of web apps need the same features, but most of this still comes down to
rolling your own.

I should be able to clone some repo, enter some DO/AWS/GOOG keys and push.

~~~
acid__
I've been using and loving Zappa[1] lately. Basically it lets you seamlessly
deploy a Flask app to AWS Lambda -- that solves your deployment, scaling,
upgrading, staging, backups, etc. And local dev is just running the Flask app
locally.

[1] [https://github.com/Miserlou/Zappa](https://github.com/Miserlou/Zappa)
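
For anyone curious what that looks like in practice: a Zappa deployment is
driven by a `zappa_settings.json` file. A minimal sketch (the bucket name and
region here are placeholders; see the Zappa README for the full option set):

```json
{
    "dev": {
        "app_function": "app.app",
        "aws_region": "us-east-1",
        "s3_bucket": "my-zappa-deploy-bucket"
    }
}
```

Then `zappa deploy dev` packages the Flask app and wires it up to API Gateway
and Lambda.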

------
gnur
Container orchestrators becoming mainstream is something I'm very excited
about. Tools like DC/OS, Nomad, Kubernetes, Docker Swarm Mode, Triton, Rancher
make it so much easier to have fast development cycles. Last week I went from
idea, to concept, to deployed in production in a single day. And it is
automatically kept available, restarted if it fails, traffic is routed
correctly, other services can discover it, the underlying infrastructure can
be changed without anyone ever noticing it.

This also brings me to Traefik, one of the coolest projects I have come across
in the last few months.

Traefik + DC/OS + CI/CD is what allows developers to create value for the
business in hours and not in days or weeks.

~~~
stkrzysiak
I've been researching container orchestration recently and I personally don't
see the incentive to jump into containers from an infrastructure perspective.
I think using Packer/Vagrant/Ansible is pretty easy and meets my needs. The
orchestration overhead for containers seems like overhead I don't need just
yet. So the big question I've been asking myself is at what point an AWS AMI
will be less versatile than a Docker container, assuming it originated with
Packer and I can build images for other clouds with Packer. From a developer
perspective I am very excited about containers and believe local dev with
Docker is warranted.

~~~
gnur
We mainly use Docker because it finally allows us to eradicate all the "worked
in dev" issues we had in the past. From an application perspective, Dev,
Acceptance, and Prod are identical.

Also, we deploy to production at least 4 times a day, the time from commit to
deployable to production is about 30 minutes. And because it is a container it
will start with a clean, documented setup (Dockerfile) every time. There is no
possibility of manual additions, fixes or handholding.
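
That "clean, documented setup" is just the Dockerfile itself. A minimal sketch
for a hypothetical Python service (image tag and filenames are illustrative):

```dockerfile
FROM python:3.6-slim

WORKDIR /app

# Install dependencies first, so rebuilds hit the cache when only code changes
COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .

CMD ["python", "app.py"]
```

The same image that passed CI is what starts in Dev, Acceptance, and Prod,
which is exactly what kills the "worked in dev" class of bugs.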

------
sarthakjain
Deep learning architectures built by machines (so we no longer have to design
architectures to solve problems)
[https://arxiv.org/abs/1611.01578](https://arxiv.org/abs/1611.01578)

Transfer Learning (so we need less data to build models)
[http://ftp.cs.wisc.edu/machine-learning/shavlik-
group/torrey...](http://ftp.cs.wisc.edu/machine-learning/shavlik-
group/torrey.handbook09.pdf)

Generative adversarial networks (so computers can get human like abilities at
generating content) [https://papers.nips.cc/paper/5423-generative-adversarial-
net...](https://papers.nips.cc/paper/5423-generative-adversarial-nets)

~~~
Phait
All these are definitely cool, but I think we're still a long way from leaving
the "look at this cool toy" status and stepping into the "I can add value to
society" status.

Furthermore, if we consider that most of these DL papers completely ignore the
fact that the nets must run for days on a GPU to get decent results, then
everything appears way less impressive. But that's just my opinion. I love
working in deep learning, but we still have __LOTS__ of work to do.

~~~
iandanforth
Could you elaborate? After running for days/weeks/months, the output is a net
that can do inference in seconds, or with some now-common techniques
milliseconds, with only small reductions in accuracy. These nets can then be
deployed to phones to solve a rapidly increasing number of identification
tasks, everything from plants to cancer.

The time from theoretical paper to widely deployed app is smaller in DL than
in any other field _I_ have experience with.

------
siddboots
It's all subjective, but as a data analyst I'm excited about probabilistic
databases. Short version: load your sample data sets, provide some priors,
and then query the population as if you had no missing data.

The most developed implementation is BayesDB[1], but there are a lot of ideas
coming out of a number of places right now.

[1]
[http://probcomp.csail.mit.edu/bayesdb/](http://probcomp.csail.mit.edu/bayesdb/)
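
To make "query the population as if you had no missing data" concrete, here is
a toy, hand-rolled sketch of the underlying move for a single 0/1 column: put
a prior on the rate behind the column and answer queries by sampling the
posterior instead of dropping the missing rows. (BayesDB's actual models and
its BQL query language are far more general; everything here is illustrative.)

```python
import random

def posterior_rate_samples(observed, n_samples=10000, alpha=1.0, beta=1.0, seed=0):
    """Beta-Bernoulli posterior over the rate behind a partially observed column."""
    rng = random.Random(seed)
    successes = sum(1 for v in observed if v == 1)
    failures = sum(1 for v in observed if v == 0)
    # Conjugate update: prior Beta(alpha, beta) -> posterior Beta(a, b)
    a, b = alpha + successes, beta + failures
    return [rng.betavariate(a, b) for _ in range(n_samples)]

# 6 observed rows, 4 missing: we can still query the population rate
column = [1, 1, 0, 1, None, None, 1, 0, None, None]
samples = posterior_rate_samples([v for v in column if v is not None])
estimate = sum(samples) / len(samples)
```

The point is that the missing rows widen the posterior rather than silently
biasing the answer, which is the behavior you want from a probabilistic
database.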

~~~
abhishivsaxena
Interesting. Do you know anything about agent modelling? Any idea if/how it
ties in?

~~~
siddboots
The agent modelling that I'm aware of is in simulation. I have a feeling that
there would be a lot of interesting duality between the fields of agent-based
simulation and Monte Carlo-based probabilistic modelling, but I don't know
enough about the former to say offhand.

~~~
mikhailfranco
ABM is an MC method, because different individual agents randomize their
behavior based on distributions associated with possible courses of action
defined by their agent type.
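
A minimal illustration of that point: each agent draws its behavior from a
type-specific distribution, and averaging over many agents is exactly a Monte
Carlo estimate of the population-level expectation. (All numbers here are made
up for illustration.)

```python
import random

class Agent:
    """An agent whose action each step is a draw from its type's distribution."""
    def __init__(self, move_prob):
        self.move_prob = move_prob  # probability of moving right each step
        self.position = 0

    def step(self, rng):
        # Randomized behavior, as in any ABM
        if rng.random() < self.move_prob:
            self.position += 1

def simulate(n_agents, n_steps, move_prob, seed=0):
    """The agent-average is a Monte Carlo estimate of n_steps * move_prob."""
    rng = random.Random(seed)
    agents = [Agent(move_prob) for _ in range(n_agents)]
    for _ in range(n_steps):
        for a in agents:
            a.step(rng)
    return sum(a.position for a in agents) / n_agents

# Sample mean converges to the analytic expectation (100 * 0.3 = 30)
estimate = simulate(n_agents=2000, n_steps=100, move_prob=0.3)
```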

------
mafribe
_Compilation:_

\- Meta-tracing, e.g. PyPy.

\- End-to-end verification of compilers, e.g. CompCert and CakeML.

 _Programming languages:_

\- Mainstreamisation of the ideas of ML-like languages, e.g. Scala, Rust,
Haskell, and the effect these ideas have on legacy languages, e.g. C++, Java
9, C#.

\- Beginning of use of resource types outside pure research, e.g. affine types
in Rust and experimental use of session types.

 _Foundation of mathematics:_

\- Homotopy type theory.

\- Increasing mainstreamisation of interactive theorem provers, e.g.
Isabelle/HOL, Coq, Agda.

 _Program verification:_

\- Increasing ability to have program logics for most programming language
constructs.

\- Increasingly usable automatic theorem provers (SAT and SMT solvers) that
just about everything in automated program verification 'compiles' down to.

~~~
bem94
I work in CPU design. So I'd add that the tools for formally verifying CPUs
have come a very long way in the last two years, and the next two years look
like they will be very exciting indeed.

~~~
kbradero
Wow! What tools are you guys using? Do you have the same for microcode? This
is really interesting!

------
bitshaker
My field is hypnosis, or more generally, "changework" which is jargon, but
essentially hacking the psychology of clients to get desired outcomes.

There's been a renaissance of study in placebo effects, meditation, and
general frameworks for how people change belief for therapeutic purposes or
otherwise, but to me, that's been going on for a long time and is more about
acceptance than being a new development.

One of the most exciting developments that's been coming out recently is
playing with language to do what's called context-free conversational change.

Essentially, you can help someone solve an issue without actually knowing the
details or even generally what they need help with. It's like homomorphic
encryption for therapy. A therapist can do work, a client can report results,
but the problem itself can be a black box along with a bit of the solution as
well since much of the change is unconscious.

It works better with feedback (a conversation) of course, but often can be
utilized in a more canned manner if you know the type of problem well enough.

I'm working on putting together an automated solution that's based on some
loose grammar rules, NLP, Markov chains, and anything else I can use to help a
machine be creative in language to help people solve their own problems, but
as a first step as a useful tool for beginner therapists to help them get used
to the ideas and frameworks with language to use.
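
For what it's worth, the Markov-chain piece of a system like that is the easy
part. A bare-bones word-level sketch (the corpus and start word are
placeholders; a real system would need far more than this):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length, seed=0):
    """Walk the chain, picking each next word at random from observed followers."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "what would you change if you could change what you believe"
chain = build_chain(corpus)
sentence = generate(chain, "what", 6)
```

The hard part, of course, is the "loose grammar rules" layer that keeps such
output coherent and therapeutically useful.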

So essentially, I'm getting a good chunk of the way toward hacking on a
machine that can reliably work on people's problems without having to train a
full AI or anything remotely resembling real intelligence, just mimicking it.

Before you go thinking, "Didn't they do that with Eliza?" Well yes, in a way,
but my implementation is using an entirely different approach.

~~~
charlieflowers
fancy words, exciting claims, and absolutely no detail whatsoever.

with all due respect, said politely, it is my opinion that you are a
charlatan.

~~~
bitshaker
Thank you for expressing your opinion.

I wasn't interested in long citations or garnering proof of my work in
particular with training a machine to do this work. I simply wished to add to
this thread and did so, in order to show someone out there, maybe even you,
what else is going on that is exciting in my little corner of the world.

I'm not that good of a programmer, so it's not in a state that it does work
yet. I hope my original comment didn't suggest otherwise, but let me be
perfectly clear here: I have no working machine implementation that can do
what I want yet. It can work with simple canned responses like Eliza, but it's
not enough. I am working on employing all of the techniques and tools
mentioned, but progress is slow.

However, this is work and change I employ daily with my clients professionally
and I can assure you that it does work.

You don't even have to take my word for it.

Consider... seriously consider: who would you not be if you weren't you?

If you thought about that one for a sec and felt a little spaced out for a
second, you did very well.

If you came up with something quickly like "me" and didn't really actually
consider the question, allow me to pose another to you. Again, seriously
consider this. Read it a few times. Imagine emphasis on different words each
time.

Who are you not without that problem you are interested in solving?

This work is made more difficult by text-only, highly asynchronous
communication, which is why I mentioned it being easier within a conversation.

If you are interested in more, google "mind bending language" or "attention
shifting coaching" and find Igor Ledochowski and John Overdurf. Their work has
helped me change the lives of thousands.

~~~
kaoD
I'm not GP but I'll give it a try:

> You don't even have to take my word for it.

Honest question: how not?

> who would you not be if you weren't you?

Depending on how you parse the sentence, either "someone else" or "that's just
a paradox". Essentially the concept of "me" as an entity is fundamentally
flawed.

Playing with the meanings of "me" and "not me" in a subjunctive form doesn't
make the question very interesting (as in non-trite), to be honest. I guess
the intent is not to be fresh but to be thought-provoking or similar, or
setting the listener in a certain mindset? Still, sets my mind in the "meh"
state.

> Who are you not without that problem you are interested in solving?

I'm not my problems. I'm also not not-my-problems. Actually I _am_ not (I
_isn't_?). I don't see how this helps with anything, though.

Either way, your questions pose (to me) more philosophical thinking (which I
already do, anyways) than mindbending or whatever. Maybe my mind is already
bent... and I have to say it didn't go very well ;)

A long time ago I came to the conclusion that these questions are merely
shortcomings in how language and cognition works. Metaphysics, ontology (and
even epistemology) are just fun puzzles with no solution, which I'm ultimately
obliged to answer with "who the f--- cares".

Kant was right.

Not that anything you said is directly contradicted by Kant. In fact I'd say
it fits very well within the idea that "human mind creates the structure of
human experience". It's just never been really useful to me in any way. I
really, really, want to know more of (and even believe in) your changework
but, often being presented with vague ideas, no one has ever made a solid case
on how it isn't, as GP said, charlatanry.

------
ThePhysicist
Not really my main field, but in web technology it seems that serverless
architectures such as AWS Lambda will be a pretty big game changer in the
near future:

Lambdas are lightweight function calls that can be spawned on demand in sub-
millisecond time and don't need a server that's constantly running. They can
replace most server code in many settings, e.g. when building REST APIs that
are backed by cloud services such as Amazon DynamoDB.
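
For a sense of scale: in this model a whole "backend" can be a single handler
function. A hedged sketch (the names and the API-Gateway-style event shape
here are illustrative, not a full spec):

```python
import json

def handler(event, context):
    """Minimal API-Gateway-style Lambda handler: there is no server process;
    the platform spawns this function per request."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"greeting": "hello, " + name}),
    }

# Locally, "running the backend" is just calling the function with a fake event
resp = handler({"queryStringParameters": {"name": "HN"}}, None)
```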

I've heard many impressive things about this way of designing your
architecture, and it seems to be able to dramatically reduce cost in some
cases, sometimes by more than 10 times.

The drawback is that currently there is a lot of vendor lock-in, as Amazon is
(to my knowledge) the only cloud service that offers lambda functions with a
really tight and well-working integration with their other services (this is
important because on their own lambdas are not very useful).

~~~
falcolas
I have to admit, I'm pretty bearish when it comes to serverless. Mostly
because it's an abstraction which leaks to hell and back.

Your input is tightly restricted, and with Amazon in particular, easy to break
before you even get to the Lambda code (the Gateway is fragile in silly ways).
Your execution schedule is tightly controlled by factors outside your control
- such as the "one Lambda execution per Kinesis shard". You can be throttled
arbitrarily, and when it just fails to run, you are limited to "contact tech
support".

In short, I can't trust that Lambda and its ilk are really designed for my use
cases, and so I can only really trust it with things that don't matter.

~~~
nstj
I'm bearish on it right now, though conceptually it's a fantastic idea which
just has quite a way to go before it's ready for prime time. I definitely
think a lot of people have jumped the gun by pushing serverless before it's
really ready for the outside world.

------
nadaviv
In the Bitcoin space, I'm most excited about the Lightning Network [0][1] and
MimbleWimble [2][3], which are in my view the two most groundbreaking
technologies that really push the limits of what blockchains are capable of.

[0]
[https://en.bitcoin.it/wiki/Lightning_Network](https://en.bitcoin.it/wiki/Lightning_Network)

[1] [https://lightning.network/](https://lightning.network/)

[2]
[https://download.wpsoftware.net/bitcoin/wizardry/mimblewimbl...](https://download.wpsoftware.net/bitcoin/wizardry/mimblewimble.txt)

[3] [https://bitcoinmagazine.com/articles/mimblewimble-how-a-
stri...](https://bitcoinmagazine.com/articles/mimblewimble-how-a-stripped-
down-version-of-bitcoin-could-improve-privacy-fungibility-and-scalability-all-
at-once-1471038001/)

~~~
jbpetersen
To add to that, the escalating hashrate war within Bitcoin between Unlimited
and Core is popcorn worthy.

And within the wider space of blockchains, improving access to strong
anonymization techniques appears to be moving forward quickly:
[https://blog.ethereum.org/2017/01/19/update-integrating-
zcas...](https://blog.ethereum.org/2017/01/19/update-integrating-zcash-
ethereum/)

~~~
mrfusion
What's the unlimited thing?

~~~
Crespyl
It has to do with the need for increased block sizes. Right now, each block
(chunk of validated transactions) can only be 1MB in size. This restricts the
total throughput of the network, but keeps the total size of the blockchain
down and the growth rate low.
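
The capacity ceiling that cap implies is easy to back-of-envelope (the average
transaction size here is an assumption; real transactions vary widely):

```python
# Rough Bitcoin throughput under the 1 MB block size cap
block_size_bytes = 1_000_000
avg_tx_bytes = 250        # assumed average transaction size
block_interval_s = 600    # one block roughly every 10 minutes

tx_per_block = block_size_bytes // avg_tx_bytes
tx_per_second = tx_per_block / block_interval_s  # the often-quoted ~7 tx/s
```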

The original expectation was to gradually increase the block size to increase
capacity as more users joined the network, eventually transitioning most users
to "thin" clients that don't store the (eventually enormous) complete
blockchain.

The Core devs right now feel that the current situation (every node a full
peer with the complete chain, but maxed out capacity and limited throughput)
is preferable for a number of reasons including decentralization, while the
Unlimited devs feel that it's time to increase the block size in order to
increase capacity and get more users on the network, among other things.

Questions like this are usually decided by the miner network reaching
consensus, with votes counted through hashing power/mined blocks. I'm not sure
where things stand at the moment, but it's been interesting to observe.

I understand it's become a rather contentious topic in the community.

~~~
mrfusion
Thanks for the great explanation! So where does segwit come into this?

~~~
Crespyl
I'm not the best person to ask, and I don't fully understand segwit, but I
think it's the Core devs (partial) solution to the problem of scaling up the
network without increasing block size.

IIUC, segwit makes certain kinds of complicated transactions easier to handle
(ones with lots of inputs/outputs), possibly allowing more transactions to fit
in less space, and lays useful groundwork for overlay networks like Lightning.
I think the thinking is that overlay networks can be fast, and eventually
reconcile against the slower bitcoin network.

Unlimited would rather just scale up the bitcoin network in place, instead of
relying on an overlay network.

You'd probably get better information from bitcoincore.org and
bitcoinunlimited.info, or the subreddits /r/bitcoin and /r/btc (for core and
unlimited, respectively, they split after moderator shenanigans in
/r/bitcoin).

------
csbartus
New aesthetics in web design.

With the brutalist movement, something new started. People went back to code
editors to create websites by hand, skipping the third-party, non-web-native
user interface design tools that come prefilled with common knowledge and
make websites look uniform.

The idea of design silos and brand-specific design thinking is dropped: no
more bootstrap, flat design, material design, etc.

It's like going back to the nineties and reinventing web design. You start
from scratch, on your own, and build bottom-up without external influence or
help.

It's about creativity vs. the bandwagon, about crafting your own instead of
assembling something from popular pieces.

[http://brutalistwebsites.com](http://brutalistwebsites.com)

~~~
dandare
Isn't it just a fad? How usable and readable is the brutalist design? Or what
are you trying to maximise other than being different?

~~~
jstimpfle
A fad? Don't you see a problem with many of today's websites? Both from a
developer's and a user's perspective.

~~~
pitaj
Sure, many "modern" sites have terrible UX, but that doesn't mean that
minimalist or "brutalist" designs are intrinsically any better.

~~~
jstimpfle
At least they load quickly. They don't hog your CPU. And they don't mess with
what you can Ctrl+F.

All of which is a great boon for UX, while being easier to design.

------
kejaed
Aerospace Engineer - Enhanced Flight Vision Systems

TLDR: Fancy fused infrared (LWIR/SWIR) and visible spectrum camera systems may
'soon' be on a passenger airliner near you.

Using infrared cameras to see through fog/haze to land aircraft has been
happening for a while now, but only on biz jets or on FedEx aircraft with a
waiver. The FAA has gained enough confidence in the systems that they have
just opened up the rules to allow these camera systems to be used for landing
passenger aircraft.

Combine that with the fact that airports are transitioning away from
incandescent lights to LEDs (meaning a purely IR sensor system is no longer
enough), and you get multi-sensor image fusion work to do and a whole new
market to sell them to.

Here is a blog post (from a competitor of ours) talking about the new rules.

[https://blogs.rockwellcollins.com/2017/01/17/worth-the-
wait-...](https://blogs.rockwellcollins.com/2017/01/17/worth-the-wait-faas-
new-efvs-rule-far-91-176/)

~~~
anfractuosity
That sounds very interesting!

Say with a car that has a heads-up display for night vision: if it had an SWIR
sensor and IR lights, could that cut through fog too? Or is it the LWIR that
is able to do that?

~~~
kejaed
SWIR sensors are there for hot, burning lights. LWIR (aka thermal) sees things
that are at everyday temperatures. Both wavelengths have better transmittance
through fog than visible light, so we say those sensors can 'see through' fog.
The physics comes down to the wavelength of the light vs. the size of the
particles in the medium the light is trying to get through [1].

Another fun part is that fog at one airport can be different from fog at
another, so while the weather conditions at both locations may say visibility
is "Runway Visual Range (RVR) 1000 ft", that is for a pilot's eyes, and the
same camera may work just fine at one location and not at all at the other.

[1]
[https://upload.wikimedia.org/wikipedia/commons/e/e9/Atmosphe...](https://upload.wikimedia.org/wikipedia/commons/e/e9/Atmospheric.transmittance.IR.jpg)
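
The wavelength dependence can be illustrated with a quick calculation. In the
small-particle (Rayleigh) limit, scattering strength scales as 1/wavelength^4;
note that fog droplets are often comparable in size to IR wavelengths, so real
fog is in the Mie regime and this only shows the trend, not the full physics:

```python
def relative_scattering(wavelength_um, reference_um=0.55):
    """Rayleigh-limit scattering strength relative to green visible light
    (0.55 um). Valid only for particles much smaller than the wavelength."""
    return (reference_um / wavelength_um) ** 4

visible = relative_scattering(0.55)  # 1.0 by definition
swir = relative_scattering(1.5)      # roughly 50x less scattering
lwir = relative_scattering(10.0)     # orders of magnitude less again
```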

------
wolfram74
The era of gravitational-wave astronomy is beginning in earnest with LIGO's
new data collection run. It had been offline for upgrades from 2016/02 to
2016/11 and is now even more sensitive
[[http://www.ligo.org/news/](http://www.ligo.org/news/)]

------
espeed
The convergence of 3 big ideas in graph computing:

1\. D4M: Dynamic Distributed Dimensional Data Model

[http://www.mit.edu/~kepner/D4M/](http://www.mit.edu/~kepner/D4M/) GraphBLAS:
[http://graphblas.org](http://graphblas.org)

Achieving 100M database inserts per second using Apache Accumulo and D4M
[https://news.ycombinator.com/item?id=13465141](https://news.ycombinator.com/item?id=13465141)

MIT D4M: Signal Processing on Databases [video]
[https://www.youtube.com/playlist?list=PLUl4u3cNGP62DPmPLrVyY...](https://www.youtube.com/playlist?list=PLUl4u3cNGP62DPmPLrVyYfk3-Try_ftJJ)

2\. Topological / Metric Space Model

Fast and Scalable Analysis of Massive Social Graphs
[http://www.cs.ucsb.edu/~ravenben/temp/rigel.pdf](http://www.cs.ucsb.edu/~ravenben/temp/rigel.pdf)

Quantum Processes in Graph Computing - Marko Rodriguez [video]
[https://www.youtube.com/watch?v=qRoAInXxgtc](https://www.youtube.com/watch?v=qRoAInXxgtc)

3\. Propagator Model

Revised Report on the Propagator Model
[https://groups.csail.mit.edu/mac/users/gjs/propagators/](https://groups.csail.mit.edu/mac/users/gjs/propagators/)

Constraints and Hallucinations: Filling in the Details - Gerry Sussman [video]
[https://www.youtube.com/watch?v=mwxknB4SgvM](https://www.youtube.com/watch?v=mwxknB4SgvM)

We Really Don't Know How to Compute - Gerry Sussman [video]
[https://www.youtube.com/watch?v=O3tVctB_VSU](https://www.youtube.com/watch?v=O3tVctB_VSU)

Propagators - Edward Kmett - Boston Haskell [video]
[https://www.youtube.com/watch?v=DyPzPeOPgUE](https://www.youtube.com/watch?v=DyPzPeOPgUE)

~~~
lowglow
So many good links here. Most interested in Dynamic Distributed Dimensional
Data Model.

What are you working on?

~~~
espeed
PUFR [http://pufr.io](http://pufr.io) (IoT security startup), and for the last
few years I've been doing R&D on the design of a graph computing model that
unifies some of the ideas above.

------
Curious42
As an Android developer, I'm most excited about instant apps. If it works as
marketed, you won't have to hold on to the apps which you use maybe once or
twice a week. Instead, you'll be able to download the required
feature/activity/view or whatever else on the fly.

I'm not sure I did justice to instant apps, because there's a language barrier
at play. But here's an example: I use the Amazon app maybe once every 2
weeks, and yet it's one of the apps consuming the most memory on my phone due
to background services. After Amazon integrates instant apps, I'll
be able to delete the app, and just google search for the product through my
phone. The Google search will then download the required page as an app,
giving me the experience of an app, whilst not even having it on the phone.

~~~
rrrhys
This is going to sound really naive... Isn't that just a website?

~~~
Curious42
Here's a really good overview:
[https://www.youtube.com/watch?v=cosqlfqrpFA](https://www.youtube.com/watch?v=cosqlfqrpFA)

Also, to answer your question: no, it's not the same as a website, because it
will be a native Android app with the ability to communicate with the Android
OS, like any other Android app.

The possibilities for improved UX that instant apps open up are endless. It
all comes down to how you want to use them.

~~~
pantalaimon
I watched the video and honestly, this is terrible.

If I'm clicking on a link I want to open it with my browser, not with some
app. I find this extremely annoying with facebook and even the news carousel
already.

I can't open new tabs, copy the URL, or switch to other tabs like I would in
the normal browser. This is extremely confusing and I don't see how this
benefits me in any way.

~~~
lj3
I couldn't agree more. I'm excited about the idea of streaming apps, but the
execution here is terrible. How do you control which url opens which app? If
somebody sends you a reddit or hn link, which app does it run? There are
dozens out there for both! The whole point of the app is not to have to manage
these things, but the only way I can see this working is if you have yet
another area in settings to manage which apps open for which links.

A better implementation would have been to have a popup with a list of
compatible apps to run, including an option to run it in a browser like any
normal link.

I really hope the NFC bit is opt-in rather than on by default. I don't want
to have to manually disable it every time I get a new phone. In fact, even if I've opted
into having the SF Park app run when I'm near a parking meter, I want the
option to "reject" it just like I do when I get an incoming phone call.

~~~
cooper12
> How do you control which url opens which app?

The website itself specifies which app should be used by publishing a Digital
Asset Links file. ([https://developer.android.com/training/app-
links/index.html](https://developer.android.com/training/app-
links/index.html))

~~~
lj3
I like that even less. If you haven't manually added an app association, it
defaults to opening the app specified in the digital assets file without any
notification to the user. This is the opposite of a sane default. The first
time an app wants to run, it should always let the user decide whether they
want to run the app or continue using the browser. Otherwise, this is a recipe
for malware.

------
RileyKyeden
I do electronic music. The rise of platforms like Bandcamp and Patreon, and
the abundance of high quality free/inexpensive tools and guides is raising the
bar for quality in independent music, and making it easier for more people to
get paid in whatever niches they prefer (vs. going for a mass audience).

~~~
kekimchi
I would love to get started. I've been looking for a new hobby and this
sounds perfect. How should I start? :)

~~~
RileyKyeden
Reaper: [http://www.reaper.fm/](http://www.reaper.fm/) (good tutorials:
[http://reaperblog.net/](http://reaperblog.net/))

Good synths: [https://www.kvraudio.com/product/firebird-by-
tone2-audiosoft...](https://www.kvraudio.com/product/firebird-by-
tone2-audiosoftware)

[https://www.kvraudio.com/product/synth1-by-ichiro-
toda](https://www.kvraudio.com/product/synth1-by-ichiro-toda)

Good all-purpose instrument: [https://www.kvraudio.com/product/orion-sound-
module-by-sampl...](https://www.kvraudio.com/product/orion-sound-module-by-
samplescience)

Good orchestral instruments: [http://vis.versilstudios.net/vsco-
community.html](http://vis.versilstudios.net/vsco-community.html)

A helpful article I wrote with links and basic advice for new musicians:
[https://blog.rileyreverb.com/how-to-be-a-
musician-58511c4e18...](https://blog.rileyreverb.com/how-to-be-a-
musician-58511c4e18a7)

~~~
UweSchmidt
You're describing the DAW and plugins, but wouldn't you need a MIDI keyboard,
a groovebox, or a drum machine to actually _make_ those beats?
[https://www.thomann.de/gb/roland_tr_8.htm](https://www.thomann.de/gb/roland_tr_8.htm)

~~~
RileyKyeden
Nope. I have a MIDI keyboard, but it doesn't work half the time, so I rarely
bother with it. You can do everything inside the DAW with the MIDI roll or
notation editor.

------
reasonattlm
Safe selective destruction of cells via their internal chemistry, not surface
markers, through uptake of lipid-encapsulated programmable suicide gene
arrangements.

With the right program and a distinctive chemistry to target in the unwanted
cell population, this flexible technology has next to no side-effects, and
enables rapid development of therapies such as:

1) senescent cell clearance without resorting to chemotherapeutics, something
shown to extend life in mice, reduce age-related inflammation, reverse
measures of aging in various tissues, and slow the progression of vascular
disease.

2) killing cancer cells without chemotherapeutics or immunotherapies.

3) destroying all mature immune cells without chemotherapeutics, an approach
that should cure all common forms of autoimmunity (or it would be surprising
to find one where it doesn't), and also could be used to reverse a sizable
fraction of age-related immune decline, that part of it caused by
malfunctioning and incorrectly specialized immune cells.

And so forth. It turns out that low-impact selective cell destruction has a
great many profoundly important uses in medicine.

~~~
kelly5
For 3) does destroying all mature immune cells also get rid of all immunities
that the patient has gained throughout life from vaccines, previous illness,
etc? Would it make the patient very fragile, not to have gone through gaining
those immunities at a young age?

~~~
reasonattlm
Revaccination, yes, definitely necessary in the idealized case of a complete
wipe of immune cells. But that's a small problem in comparison to having a
broken immune system. Just get all the vaccinations done following immune
repopulation.

Part of the problem in old people is that they have too much memory in the
immune system, especially of pervasive herpesviruses like cytomegalovirus.
Those memory cells take up immunological space that should for preference be
occupied by aggressive cells capable of action.

Another point: in old people, as a treatment for immunosenescence, immune
destruction would probably need to be paired with some form of cell therapy to
repopulate the immune system. In young people, not needed, but in the old
there is a reduced rate of cell creation - loss of stem cell function, thymic
involution, etc. That, again, isn't a big challenge at this time, and is
something that can already be done.

At present sweeping immune destruction is only used for people with fatal
autoimmunities like multiple sclerosis because the clearance via chemotherapy
isn't something you'd do if you had any better options - it's pretty
unpleasant, and produces lasting harm to some degree. Those people who are now
five or more years into sustained remission of the disease have functional
immunity and are definitely much better off for the procedure, even with its
present downsides, given where they were before. If the condition is
rheumatoid arthritis, however, it becomes much less of an obvious cost-benefit
equation,
which is why there needs to be a safe, side-effect free method of cell
destruction.

------
tyingq
I think web assembly is the piece most likely to change front end development
in a meaningful way. A little hard to see now, as the WASM component has no
direct access to the DOM, no GC, and no polymorphic inline cache. So, dynamic
languages are hard to do with WASM. Once those gaps are closed, however, it
should be interesting to see if javascript remains the lingua franca or not.

------
dbattaglia
For a C# developer into microservices, there's a lot to be excited about.

.Net Core: Finally, cross platform .Net. Deploying .Net services to Linux is a
dream come true. Can't wait for the platform to stabilize.

Windows Server 2016: For "legacy" applications forced to stay on Windows,
containers and Docker on Windows is a game changer. One step closer to
hopefully making Windows servers somewhat manageable.

~~~
hvidgaard
I've toyed with it on and off ever since the first beta. It's still not good
enough unfortunately. I need a very simple way to configure an instance,
version it, and deploy it in minutes. When that works frictionlessly, I'm all
over getting it pushed in my org, and once the tooling in VS supports it, it
will be easier to get other developers to adopt it.

------
bigger_cheese
I'm a materials engineer. These are two interesting developments in my field
at the moment:

Metamaterials: Essentially a material engineered to have a unique property. By
precisely controlling a material's structure you can influence how it interacts
with electromagnetic waves, sound etc. You can create materials with unique
properties such as a negative refractive index over certain wavelengths. It's
kind of a novelty but people are building "cloaking devices" using
metamaterials i.e. bending electromagnetic waves around a material in certain
ways to make it appear invisible to certain frequencies.

Graphene (and other 2D materials): These materials are a relatively recent
discovery, graphene was confirmed in 2004 and it has a number of interesting
properties. In particular its electrical and thermal properties make it
promising for a number of applications. I think it could possibly find
applications in batteries, transistors and capacitors. At the moment it is a
very expensive material to manufacture which makes it (currently) unsuited for
commercial applications. There is a heap of active research involving graphene
at the moment.

------
Seanny123
I'm honestly just super-pumped about any artificial intelligence system that's
starting to get an intuition of physics.

Google's DeepMind put out some kind of cool stuff recently [1], but I'm
mostly just excited for anything that Ilker Yildirim [2] is doing with Joshua
Tenenbaum, because it seems to triangulate more with how humans think about
physics. When I was at CogSci 2016, Joshua mentioned combining this with
analogical reasoning and that also sounded super cool, even though I'm not
sure how the two fit together.

[1] [https://arxiv.org/abs/1612.00222](https://arxiv.org/abs/1612.00222) [2]
[http://www.mit.edu/~ilkery/](http://www.mit.edu/~ilkery/)

------
iagooar
On the web development part of my job, I'm excited about Elixir / Phoenix
getting more and more mindshare. People I talk to are actively trying Elixir
out and evaluating it as the tool of choice for their next projects.

On the networking side of things, I'm excited about network virtualization and
the potential that tools like Docker and Kubernetes give to virtualizing large
and complex network topologies.

And as an employee of an IT-heavy enterprise, seeing DevOps becoming a thing
makes me happy, even if adoption is slow and expectations are high. It's still
better than waiting 6 months to get a couple of VMs to deploy my projects
to...

------
pipio21
In my company we apply computers to real-world applications in the physical
world:

Regenerative medicine: understanding DNA code and restoring cells and organs,
making eternal youth possible. It will take decades of hard work.

Ending cancer: We are studying virus mutations so that we can attack them
without invasive techniques.

Nuclear fusion: We are simulating plasma physics. This is going to be enormous
in ten years or so imho.

~~~
dcgoss
What company are you referring to?

------
asafira
In my field of quantum information processing, the current hype is all about
"Quantum Supremacy". The field currently has its sights set on the goal of
producing an experiment where a quantum system performs a computation faster
than any known computer can --- perhaps computes something that no current
computer can compute in a reasonable amount of time. Unlike much of the work
in the field up to this time, this requires a crazy amount of engineering,
more than a typical lab can undertake if they hope to keep publishing
interesting results in the meantime. My hypothesis is that this will likely
come from either a company (IBM, Google) or a government lab (if they are
allowed to publish).

------
FLGMwt
As a .NET dev, .NET Core is pretty exciting.

We're porting a sizable application to .Net Core so we can be on Linux and
save cost and time on instance launch.

I'm writing an in-depth blog post series about the process because I haven't
found any significant migration stories. I'm hoping it will help a lot of
people through the process.

~~~
3minus1
mind linking to the blog?

~~~
FLGMwt
It'll be on
[http://engineering.rallyhealth.com/](http://engineering.rallyhealth.com/)
when it's done (pardon the looks, site is WIP). It was just going to be a long
post, but I'm trying to be as helpful/detailed as possible. Probably won't
publish anything until I have the whole thing done.

I don't have an exact date unfortunately, but it'll be on there sometime
before March 2nd to coincide w/ a .NET Rocks podcast. I'll share on HN though
and bump this comment when it's released : )

------
jackgolding
Web Analytics I feel is years behind data science - but tools like
[http://snowplowanalytics.com/](http://snowplowanalytics.com/) are becoming
much more widespread and are taking market share away from Google and Adobe
which is good for everyone. Free GA is still the best tool for small sites.

~~~
michaelmior
Cool! I've heard of Snowplow from a long way back, but I haven't heard
anything about it in the past couple of years. Good to know they're still
doing well.

------
phkahler
Field: embedded software. To me RISC-V is the most exciting thing for the next
few years. The performance appears to be awesome, and free CPU IP will allow
more varieties of specialized low-cost chips for specific use cases. It should
also have a positive effect on development environments by encouraging wider
use of free toolchains.

~~~
JoachimSchipper
I'm also doing embedded work, but I don't really see - or expect - performance
from the RISC-V cores above similar CPU designs that consume about the same
area/gate-equivalents; did I miss some recent results?

Of course, freely-available and well-supported CPU IP can be very cool!

~~~
joezydeco
Embedded designer here. Came to say RISC-V as well, but not because of
performance. It will be because of price.

A significant hunk of a Cortex-M die is the ARM licensing fee. If we can drop
that? That would be an order of magnitude of savings on my BOM.

~~~
petra
>> A significant hunk of a Cortex-M die is the ARM licensing fee.

Can you give a few numbers/guesstimates for common MCUs?

~~~
phkahler
>> Can you give a few numbers/guesstimates for common MCUs?

I suspect part of the ARM deal is an agreement not to disclose price info.

------
patrics123
That would be an interesting question to ask within other specialized
communities and collect the answers in One big Post. Aint nobody got time for
that?

In UX an interesting trend is a flood of Software Tools which help during
Design, evaluation, Research, etc.

Also adaptive UI which is changed due to user attributes and past behaviour
seems to be trendy now (supported by the online marketing field with auto-
optimizing Interfaces which optimize for conversion autonomously, etc.)

~~~
aggie
Any examples you can point to of adaptive UIs?

~~~
patrics123
In my initial comment I mixed up two things into one. Let me clarify.

Adaptive content: What you see is based on your previous usage of the app/site
and not generalized what everyone is looking at. Pretty common... \- Amazon
suggestions, Google results, Facebook stream or even your auto-correct
suggestions of your phone keyboard.

Adaptive Interfaces: Where the actual controls, tools, menus change in favor
of your usage behaviour, or desired behaviour of "users like you".

(It's not quite clear if this actually helps or harms the UX, because the UI
could change without the user understanding why a menu item is no longer
available where it used to be.)

\- I am drawing a blank on real-world example "software" here \- but
web/landing-page optimization tools like Optimizely use predefined rules to
change anything on the UI (like showing a CTA button or a video, hiding a
menu, etc.), where others like Dynamic Yield move in the direction of
AI-automating that test generation and decision making in favor of a single
metric (CTR / conversion / etc.)

In the end you could argue that every real-world application is only using
"adaptive content" and not actual "adaptive UI".
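As a toy illustration of the "auto-optimize for a single metric" idea above, here is a minimal epsilon-greedy sketch; the variant names and click-through rates are invented, and real tools are far more sophisticated.

```python
import random

class VariantPicker:
    """Epsilon-greedy chooser: mostly serve the best-performing UI variant,
    occasionally explore the others."""

    def __init__(self, variants, epsilon=0.1, seed=42):
        self.rng = random.Random(seed)
        self.epsilon = epsilon
        self.shows = {v: 0 for v in variants}
        self.clicks = {v: 0 for v in variants}

    def pick(self):
        variants = list(self.shows)
        if self.rng.random() < self.epsilon:
            return self.rng.choice(variants)        # explore
        # exploit: highest observed click-through rate so far
        return max(variants, key=lambda v:
                   self.clicks[v] / self.shows[v] if self.shows[v] else 0.0)

    def record(self, variant, clicked):
        self.shows[variant] += 1
        self.clicks[variant] += clicked

# Simulated traffic against two hypothetical variants
picker = VariantPicker(["cta_button", "intro_video"])
true_ctr = {"cta_button": 0.12, "intro_video": 0.05}   # made-up rates
for _ in range(2000):
    v = picker.pick()
    picker.record(v, picker.rng.random() < true_ctr[v])
```

Over enough traffic the higher-CTR variant ends up shown most of the time, which is the whole pitch of these auto-optimizing tools.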

------
babayega2
Unicef open sourcing RapidPro ( [https://community.rapidpro.io/about-
rapidpro/](https://community.rapidpro.io/about-rapidpro/) )

~~~
secfirstmd
Interesting project. I wonder whether it will include features in the future
which allow for higher-security transfer of information, e.g. not SMS.

~~~
babayega2
Of course. You can easily use Twitter, Telegram, Messenger ...
[http://rapidpro.github.io/rapidpro/](http://rapidpro.github.io/rapidpro/)

------
samlewis
In the embedded/IoT world, I'm fairly excited about two upcoming RTOS's:
mynewt ([http://mynewt.apache.org/](http://mynewt.apache.org/)) and Zephyr
([https://www.zephyrproject.org/](https://www.zephyrproject.org/)).

~~~
petra
What's the big benefit over the mbed OS, besides a bit more portability ?

------
hemezh
The way we educate our kids hasn't changed a lot in centuries. MOOCs are
great, but completion rates are a real and as yet unsolved problem.

I believe the biggest advancement in the field of education is going to come
with VR. With VR, we can dramatically reduce the cost of "learning while
doing", which should be the only way of learning. With AI, we can provide
highly personalised paths for learners.

VR and AI technologies are finally coming to a point where, together, they
can provide a breakthrough in industries that have been mostly untouched for
decades.

~~~
alphydan
What about the kids who won't put on the VR headset because they prefer to
snap-chat, chat, youtube, waste time, do social posturing?

I think, for middle school, it's easy to underestimate how much of education
is not actual content. How do you deliver education that targets the teenage
anger / passivity / disappointment / and emotional roller coaster?

~~~
andai
This is probably my resentment speaking, but I resonate with this Paul Graham
essay about school years being miserable primarily due to school, not puberty.

[http://www.paulgraham.com/nerds.html](http://www.paulgraham.com/nerds.html)

------
moron4hire
I've noticed that the quality of conversation on VR has gotten a lot better.
Used to be you'd go to a meetup and all you could get out of anyone was either
parroting some urban myths about the porn industry driving technological
change or looking for tech support on getting Unity set up. People are now
asking themselves some really hard questions, like how do we design
applications that adapt to both VR and non-VR use (there is an argument to be
made that you can't meaningfully do so, but there is another argument to be
made that you shouldn't stop your users from trying, as they tend to surprise
you), or whether the game development industry is really the best model to
emulate.

------
aniijbod
The progress towards indistinguishable-from-reality realism in graphics
[https://www.youtube.com/watch?v=vo5ztSsA_zk](https://www.youtube.com/watch?v=vo5ztSsA_zk)

~~~
nojvek
The facial expressions were pretty mind blowing. I sometimes think with VR and
graphics getting so good, will some gamers actually spend more time in virtual
worlds than the real world

~~~
SerLava
In 15 years maybe the bandwidth will be there, and we can have full-face VR
headsets that also face track and transmit expressions to other people in
virtual worlds.

The allure is there - be convincingly you, but also look however you want to
look. That would get eaten up by a lot of people.

------
DanBC
1) Infused ketamine as a treatment for major depression and suicidal thinking

2) More understanding of the "bio psycho social" model of mental illness, with
better coordination across different agencies to prevent suicide.

------
lngnmn
That in the past people used to deceive (delude) themselves and other fools
around them with theology, speculations and metaphysics, today they do the
same with statistics, probability and abstract models.

~~~
michaelmior
It's all a matter of perspective. One could also claim that those who don't
acknowledge God are deluding themselves with a lack of theology. I'm not
trying to start a debate, but dismissing the beliefs of billions as simply
delusion/deception is painting with broad strokes.

~~~
jamesrcole
Which god? The argument you make could also be made for all people following
all the religions aside from the one whose god actually exists. The religions
can't all be right, if in fact even one is right.

So sheer quantity of believers doesn't work for making a point.

~~~
macns
I have used an argument like this, and the answer I got (which left me
dumbfounded) was along the lines of "Everyone believes in some sort of divine
power or entity or whatever, and the other religions just got theirs wrong."

They're basically saying that sheer quantity of believers in anything proves
that _their_ God exists!

~~~
michaelmior
The statement as quoted does not make the argument you claim it does. Perhaps
the person you were speaking with elaborated in order to make that point,
though.

Coming from a Christian perspective however, I would agree in general people
have evidence to believe in God. I don't intend the quote from the Bible below
to serve as any sort of evidence. This would not be a logical line of reasoning
for someone who does not believe in the truth of the Bible. However, it may
serve to further clarify my position.

"For since the creation of the world God’s invisible qualities—his eternal
power and divine nature—have been clearly seen, being understood from what has
been made, so that people are without excuse." \-- Romans 1:20

------
chris_mahan
I don't have just one field.

In programming in companies: realization that internal customers not having
choice of internal IT providers hurts IT because it reduces IT's need to
deliver valuable solutions effectively.

In leadership: management structure is a framework to enforce standardization
and generally doesn't adapt well to change, even with the latest management
silver bullets (lean, Agile, flat-orgs, etc)

Also in leadership: profound changes are occurring in society and geographies
no longer define cultures.

In commercial writing: it's still early, and this takes time, but the concept
of the "book" and how it's created is changing. Technologies that allow
writers, editors, and beta readers to work on the manuscript simultaneously
are increasing the velocity of change.

In art in general: someone else here mentioned that music creation and
payment tools are enabling entrants to sustain themselves in niche markets.
This is happening in nearly all art forms, not just music. As electronic
transfer fidelity
increases, more art can be digitized, monetized. Look for more politicized,
more global-reach art.

All these things stem from a greater understanding of the world and of human
beings, starting with ourselves. It's important to realize each human being is
a highly complex system and that generalizations about groups of humans are
increasingly being challenged as scientifically unsound.

~~~
chris_mahan
As a writer, tech and the globalization of English are enabling the hitherto
impossible. Still not clearly seen but glimpsed as shadows behind screens,
they either scare the timid or thrill the brave. They are coming.

------
thenomad
My field's VR, so... all of it.

In particular, wireless transmitters for roomscale are really exciting -
seriously, I cannot wait to get rid of the wire-to-head era - as is roomscale
for mobile devices.

The Vive getting additional trackers is also super-cool, as that will enable
some much better forms of locomotion through foot-tracking. It'll take a
little while to take off but I expect the Lighthouse tracking ecosystem to
produce all kinds of cool things.

(Not all in VR, either. Drones plus Lighthouse, for example...)

------
sktrdie
My field of interest is censorship resistant systems. Systems like ZeroNet[1]
are quite fascinating and are quickly becoming popular and used. Essentially
they're decentralized via the bittorrent network. One cool thing that it
brings to the table is the idea of having users modify a website (similar to
how your comment modifies this page) - which is a hard problem in a
decentralized system. They have come up with an interesting way for users to
do this using trusted third-party certifying systems which are still totally
decentralized (because users can switch to others when they see fit).

1\. [https://zeronet.io/](https://zeronet.io/)
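The certifying idea can be sketched in a few lines. Note this uses HMAC purely as a stand-in for the asymmetric signatures a real system would use (with HMAC the verifier would need the certifier's secret, which real deployments avoid), and all the names are invented.

```python
import hmac
import hashlib

# A third-party "certifier" vouches for a user identity; peers hosting the
# site accept a user's edit only if the certificate checks out. Users can
# switch certifiers whenever they see fit, keeping the system decentralized.

CERTIFIER_KEY = b"certifier-demo-secret"   # stand-in for a signing keypair

def issue_cert(username, user_pubkey):
    """Certifier binds a username to the user's key."""
    msg = f"{username}:{user_pubkey}".encode()
    return hmac.new(CERTIFIER_KEY, msg, hashlib.sha256).hexdigest()

def verify_edit(username, user_pubkey, cert):
    """A peer checks the certificate before merging the user's change
    into its local copy of the site."""
    msg = f"{username}:{user_pubkey}".encode()
    expected = hmac.new(CERTIFIER_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert)

cert = issue_cert("alice", "a1b2c3")
print(verify_edit("alice", "a1b2c3", cert))      # genuine edit accepted
print(verify_edit("mallory", "a1b2c3", cert))    # forged identity rejected
```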

~~~
shakna
I'd take ZeroNet with a grain of salt.

I helped with it for a little while, but the main developer was resistant to:

* Using a package manager or bundling dependencies into a compressed form. Dependencies had to be in the same git repo, fully extracted. (A bit of a "code smell")

* Dependencies could take months to get security updates.

* Documentation couldn't be in the git repo.

* Python 3 was "not an option"

Also:

* The main developer has limited experience with the torrent protocol.

It is an interesting project: but it is not a private or secure one.

------
planteen
CubeSats and small satellites are changing the game for spacecraft. Now
scientists can get experiments launched for a few million dollars instead of
campaigning much of their career for a mission costing hundreds of millions.

------
zeptomu
I work in remote sensing and we do e.g. segmentation of satellite imagery.
There are two exciting developments: First, lots of vector data (think
building footprints, road networks, etc.) _and_ (satellite) raster data (e.g.
Sentinel-2) is now available for _free_ , secondly image segmentation using
CNNs works just extremely well. Therefore there are many opportunities to
build all kinds of software, in particular CNN based classifiers and
distributed systems to handle the immense load of new data.

So I can highly recommend the field of remote sensing as there are many
interesting problems to solve.

~~~
consultutah
Where can you get satellite derived building footprints for free?

~~~
zeptomu
There is currently no established hub that aggregates them, but public
agencies often offer them through open-data initiatives on a national level;
e.g. Austria has [https://www.data.gv.at/](https://www.data.gv.at/) and
searching for "Gebäude" (building in German) lets you find e.g. the data set
[https://www.data.gv.at/katalog/dataset/ac74b38e-57cd-4c8c-8f...](https://www.data.gv.at/katalog/dataset/ac74b38e-57cd-4c8c-8fea-c397da185fcf)
where you can download building footprints for the state of Tyrol.

The same thing is also done on an international level, e.g. the European Union
provides platforms and also the environmental agencies in the US.

# edit:

Clarification: These footprints are not satellite-derived (that is the goal,
but it doesn't yet work well enough for many applications, though we will
probably get there ...); they are hand-crafted by people working in city
planning. The point is that you can use them as training data.
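As a rough sketch of turning such footprints into training labels for a segmentation CNN, here is a pure-Python rasterizer; a real pipeline would use GDAL/rasterio with georeferenced coordinates, and the polygon below is invented.

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test; poly is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def rasterize(poly, width, height):
    """Burn a footprint polygon into a 0/1 label mask, testing each
    pixel center against the polygon."""
    return [[1 if point_in_polygon(c + 0.5, r + 0.5, poly) else 0
             for c in range(width)] for r in range(height)]

# Hypothetical 4x4 building footprint inside an 8x8 image tile
mask = rasterize([(2, 2), (6, 2), (6, 6), (2, 6)], 8, 8)
```

The resulting mask lines up pixel-for-pixel with the raster tile, which is exactly the (image, label) pair a segmentation network trains on.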

------
gtycomb
Enterprise Architecture. What is often an unmanageable bundle of "models",
pictures, and documentation (UML etc., or tools or “repositories”) is giving
way to concise and precise schemas for architecture decision-making – a
pleasant outcome of the informal global teamwork surrounding meta-models in
DoDAF, simplifying EA activity to a level that had not been anticipated ...

~~~
contingencies
_UML - I know next to nothing about UML - but what I do know is the language
was invented first and then people came around and tried to give semantics to
the language. Well, in other words what that means is that the language was
invented first and it really didn 't mean anything. And then, later on, people
came around to try to figure out what it meant. Well, that's not the way to
design a specification language. The importance of a specification language is
to specify something precisely, and therefore what you write - the
specification you write - has to have a precise, rigorous meaning._ \- Leslie
Lamport

~~~
gtycomb
UML as a specification language is the right tool in software architecture. I
find it to be a very flexible tool for software projects or process models.
However, _Enterprise_ architecture needs to work with the "Business" (consider
COSO, COBIT, ITIL, and why they emerged when UML foundations were already so
strong).

------
jMyles
I live full-time on a school bus with a family.

Flexible solar panels, LED lighting with open source drivers, and the new
generation of DC refrigerators are all incredibly exciting and are allowing us
to experiment with living without grid electricity.

~~~
chairmankaga
Could you share pictures of the arrangements? I assume you gutted it out and
did some clever interior decorating.

~~~
jMyles
Soon. :-)

Building a simple static website and instagram. We'll share pics with HN soon.

We'll also have the bus at PyCon in Portland.

------
andrey_utkin
Hardware manufacturers caring about their drivers in mainline Linux.

~~~
gue5t
omg who

~~~
andrey_utkin
Nvidia, Broadcom, ARM, Mediatek, Samsung... I don't mean to imply they all do
a perfect job or open-source everything they sell, but there are noticeable
amounts of code they put into the mainline kernel.

------
rayalez
SideFX Houdini 16 is coming out [1], the new version of the most awesome
software for 3D VFX and animation. Super excited about this, it's gonna be
awesome!

Also, I'm really looking forward to the ActivityPub [2] implementation,
that'll do a lot of interesting things for decentralized web.

[1] [https://www.sidefx.com/community/houdini-16-launch-
streaming...](https://www.sidefx.com/community/houdini-16-launch-streaming-on-
february-6/)

[2] [https://www.w3.org/TR/activitypub/](https://www.w3.org/TR/activitypub/)

~~~
robeastham
Looking forward to this too. I'm going to the launch party for Houdini 16 here
in East London tonight. Make sure to check out Fabric Engine too, especially
if you're doing any realtime VR/AR stuff.

Overall, I'm most excited about VR/AR/MR in relation to storytelling and
education and how the two can be combined. Houdini and Houdini Engine for UE4
are definitely worth considering as part of your VR/AR development stack.

------
dotancohen
In general, this is a question that I would ask interviewees (for any
position). An answer other than shock shows that they are keeping abreast of
their field.

~~~
tomjen3
Then I would fail. Not because I don't follow tech news, but because I feel
what is being created now is stuff we should have had decades ago (what
company is even working on flying cars?).

~~~
bhaak
> (what company is even working on flying cars?)

For example [http://www.aeromobil.com/](http://www.aeromobil.com/) or
[http://lilium-aviation.com/](http://lilium-aviation.com/)

I'm personally rather disappointed that we still don't have a moon colony.
Making that happen is unfortunately not part of my field.

------
tluyben2
The size of embedded electronics we have now. Makes me very excited about the
near future. As a hobby I am excited by the advances in programming-language
development; most seem tiny and incremental, but a lot of long-term research
is getting working implementations, and that is brilliant. Another hobby is
the robust push for timing-perfect emulators of more and more older systems.
But more than anything, VR excites me; it is not 'my field' per se (I plod
around clumsily with little demos) but it will be in the future. And it will
never end.

Edit: there is a lot to be excited about these days

------
gigatexal
SQL Server coming to Linux via Docker containers. It's insane and exciting.
We are an MS-only shop, and this is exciting because I'm pushing to move us
away from Windows and onto Linux if possible; the kicker is that we are
dedicated to SQL Server, so exciting times ahead. Hopefully MS doesn't gimp
SQL Server on Linux.

------
dorait
Chatbots with Intelligence. A variety of skill bots that can teach people at
all levels. Made possible by AI engines like api.ai, luis.ai and others.

------
DanielBMarkham
I'm really pumped about this open source tool project I've started which
promises to join Lean Startup/Hypothesis-Driven Development and DevOps. Enter
everything only once, have it available wherever it's needed.

Analysis has always been an area where the tech community has been lacking,
ever since it was overdone back in the days of structured programming. It's
really cool to bring back a bit of structured analysis as just another tool in
the DevOps pipeline and join up the information with all the folks who need it.

~~~
dualogy
> which promises to join Lean Startup/Hypothesis-Driven Development and DevOps

Finding this tricky to parse, got a link or repo?

~~~
DanielBMarkham
Still working on the elevator pitch. Unfortunately it's not as obvious as
something like "Facebook for cats!" (Although I think it will be much more
useful)

The general idea is to be able to have informal, unstructured business
conversations, take those conversations and type extremely brief, semi-
structured (tagged) notes, and have those notes "compile" out to various
places throughout the organization where they might be needed. One way to
think of it is Requirements/Use Cases/User Stories without the rigor. (Or
rather, without the rigor and the onerous BS folks always seem to be adding
around them.)

Here's the repo. There's also a PDF with details of the tagging language I can
send if you're interested. Ping me.

[https://github.com/DanielBMarkham/easyam](https://github.com/DanielBMarkham/easyam)

------
coinidons
In Bioinformatics/DNA sequencing I'm probably most excited by Illumina's push
toward a 100USD genome.

Their current scale-up of instruments I think means that they're looking to
aggressively push into diagnostic applications.

The lack of competition is unfortunate however.

~~~
roye
Long reads from PacBio, Oxford Nanopore, and 10x are also exciting. This new
tech, coupled with Hi-C for scaffolding and single-cell sequencing, brings up
the possibility of complete knowledge of your genome, the collection of
strains in a metagenome, or all the types of cells in a tissue/cancer sample.

------
samuelbrin
"field" would be a strong word as it's more of a diy hobby thing, but in the
world of FPV drones I'm excited for flight controllers with integrated 4-in-1
ESCs (electronic speed controllers). Wouldn't say it has changed the game, but
it makes it so much easier to build these quadcopters and opens up new
possibilities.

------
profalseidol
The growing class consciousness is the most exciting, as well as scariest. We
can build a non-profit driven world (socialism) - or - hate driven world
(fascism). Reading various texts starting with Karl Marx's Das Kapital is
probably the most important learning a person can have at present.

------
SAI_Peregrinus
NewHope and NewHope-Simple Ring-LWE key exchange systems. Post-quantum secure
key exchange with performance (speed/key size) that's actually practical!
There's not much point to having a secure cryptosystem if it's so expensive
you can't use it.
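For intuition, here is a toy LWE-style key agreement at tiny, utterly insecure parameters (only the modulus 12289 is borrowed from NewHope; everything else is invented for illustration): both sides compute values that agree up to a small error term, which a reconciliation step then rounds away to get an exact shared key.

```python
import random

q, n = 12289, 8                      # 12289 is NewHope's actual modulus
rng = random.Random(0)

def small_vec():
    """Small secret/error vector, entries in {-1, 0, 1}."""
    return [rng.randint(-1, 1) for _ in range(n)]

A = [[rng.randrange(q) for _ in range(n)] for _ in range(n)]  # public matrix

s, e = small_vec(), small_vec()      # Alice's secret and error
t, f = small_vec(), small_vec()      # Bob's secret and error

# Alice publishes b = A*s + e; Bob publishes u = A^T*t + f (all mod q)
b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(n)]
u = [(sum(A[j][i] * t[j] for j in range(n)) + f[i]) % q for i in range(n)]

k_alice = sum(u[i] * s[i] for i in range(n)) % q   # = t*A*s + f.s (mod q)
k_bob   = sum(b[i] * t[i] for i in range(n)) % q   # = t*A*s + e.t (mod q)

diff = (k_alice - k_bob) % q
diff = min(diff, q - diff)   # centered: only the small error terms remain
print(diff)                  # far below q; reconciliation rounds it away
```

The shared values differ by at most `f.s - e.t`, which is tiny relative to q, so rounding to a few high bits gives both parties the same key with overwhelming probability.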

------
joelthelion
Deep learning is a game changer for image processing (that should be fairly
obvious to anyone reading HN). It still requires a lot of expertise to use,
but it's enabling people to do things that were previously extremely difficult
or even impossible to achieve.

~~~
pdimitar
Would you be up to pointing us at open-source tools with which people can do
these things? And a few examples?

~~~
popcorncolonel
Tensorflow. And the artistic style transfer NN, image sharpening NNs,
colorizers, etc.

~~~
pdimitar
Wasn't Tensorflow only usable online, on Google's servers? I am looking for
something that is fully independent and 100% runnable on my machines only.

~~~
mikecb
No. Google has a hosted version of tensorflow, called CloudML, but tensorflow
is an open source project that can be run anywhere.

~~~
homarp
> that can be run anywhere.

as long as you have a Nvidia GPU

~~~
mikecb
GPU just speeds up training, it isn't required.

------
Kevin_S
I'm an accountant working on financial reporting, and I am very excited about
ways to implement automation into financial reporting processes. Only just now
are people using Excel proficiently; I can't wait to see what the next big
step is.

Long story short, so many processes I work with are done completely manually,
which is a colossal waste of time. When I started, the person who previously
did my job had about 7 main processes they completed monthly, which took about
60 hours. Those 7 processes take me about 10 hours now that I have built
automated workbooks.

The sad thing is that these Excel capabilities have been around forever, but
no one understands them.
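For a flavor of what automating one of those manual processes can look like outside Excel, here is a minimal Python sketch of a monthly roll-up; the column names and figures are invented (in practice one might also drive Excel workbooks directly with a library such as openpyxl).

```python
import csv
import io
from collections import defaultdict

# Hypothetical ledger export: the kind of grouping-and-totaling that often
# gets done by hand, cell by cell, every month.
SAMPLE = """account,amount
Travel,120.50
Travel,80.00
Office,45.25
Office,10.00
"""

def monthly_totals(fileobj):
    """Group ledger lines by account and total them."""
    totals = defaultdict(float)
    for row in csv.DictReader(fileobj):
        totals[row["account"]] += float(row["amount"])
    return dict(totals)

totals = monthly_totals(io.StringIO(SAMPLE))
print(totals)   # {'Travel': 200.5, 'Office': 55.25}
```

Point the same function at a real export file each month and the hour of copy-paste becomes a one-line script run.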

------
VestingAxis
As someone who works in the semiconductor industry, one of the most exciting
things happening right now is the development and emergence of
persistent/storage-class memory (PCM/RRAM/3DXP/NVDIMMs). The implications of
a persistent alternative to DRAM are immense, and besides fundamentally
changing compute/memory/networking/storage architectures it will also change
programming models and SW stacks as we know them today. This is a topic I feel
doesn't get enough visibility here, especially given that support for such
technologies has already started getting baked in to Linux and Windows.
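
The programming-model change is easiest to see in code. Real persistent-memory stacks map a file from a DAX filesystem and flush CPU caches to make stores durable; this stdlib-only Python sketch merely imitates the shape of that model (plain loads/stores plus an explicit flush) on an ordinary file:

```python
import mmap
import os

def open_pmem(path, size=4096):
    """Map a file for byte-addressable access. On real NVDIMMs the
    file would live on a DAX filesystem, so loads and stores would
    hit the media directly instead of the page cache."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    os.ftruncate(fd, size)
    buf = mmap.mmap(fd, size)
    os.close(fd)  # the mapping holds its own reference
    return buf

def persist(buf, record, offset=0):
    """A store is just a memory write; durability is an explicit
    flush (analogous to CLWB + SFENCE on real hardware)."""
    buf[offset:offset + len(record)] = record
    buf.flush()
```

No serialization and no write() syscall per record: data structures live directly in the persistent region, which is why the software stack changes so much.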

~~~
kaeluka
I like to watch recordings of technical talks while I'm cooking or doing
housework. Do you have any link, perhaps on YouTube, that gives me an intro?

~~~
VestingAxis
The presentations from the recently held SNIA PMEM summit would be a great
start for a high level intro: [http://www.snia.org/pm-
summit](http://www.snia.org/pm-summit)
[https://www.youtube.com/user/SNIAVideo/videos](https://www.youtube.com/user/SNIAVideo/videos)

~~~
kaeluka
Thank you very much!

------
ivanceras
Rust, WebAssembly, and the ability to compile Rust code to Wasm.

------
suhith
Docker, containers are crazy powerful and cool too!

Lots of cool stuff in the space like Kubernetes, Swarm, CoreOS, rkt!!

------
Existenceblinks
Embedded systems - wireless sensor networks. I know they've been around for a
long time, but IoT should encourage them further, and they could enable the
development of new kinds of devices as well. Look at the camera industry, for
example: there should be more types of sensor as popular as the image sensor.
Quadcopters/drones/AI, etc.

In my view, there is still huge room for applications where wireless and
sensors combine, and we already have the web/native platforms. This is such an
exciting development!

~~~
petra
>> a huge room of applications where wireless and sensor combined

Can you please expand on this ?

~~~
Existenceblinks
I was thinking of a group or cluster of quadcopters flying together.

There are some nice protocols and topologies in the wireless sensor network
literature. While the devices' sensors collect environmental data, the devices
can communicate with each other in several ways (e.g. ad hoc, hierarchical)
and command each other to behave differently (e.g. quadcopters maneuvering in
whatever pattern is beneficial).

Some more ideas on that flying-object example: the cluster could calculate
overall battery usage and balance it across itself via wireless charging on
the fly.

Underwater devices or robots would be even more interesting.

------
genieyclo
Amazon Polly wrt text-to-speech (much cheaper than Ivona and maybe better over
the long-term)

~~~
hiddencost
Amazon Polly is ivona. Amazon bought them years ago and they do all of
Amazon's TTS.

~~~
genieyclo
Yes, I know; again, see the qualifiers...

------
qiv
I am a physicist working in biology (so take these with a grain of salt), and
CRISPR is arguably the most exciting development there. The technique makes it
possible to edit DNA using guide RNAs, which can be readily synthesized, in
contrast to the DNA-binding proteins that targeted editing required before.
What's more, the same technology can be used to adjust gene regulation too.
These techniques are not only giving basic research a big boost but also
making many new treatments possible.

Other hot topics are organoids and organs-on-a-chip. These are experimental
systems in which stem cells are induced to grow into structures similar to
embryos or organs, allowing the study of development and facilitating drug
testing, etc.

Thirdly, advances in sequencing have made it possible to study what kinds of
bacteria live symbiotically within and on us. The composition of this
so-called microbiome seems to widely affect body and mind.

Finally, in my personal field, the simulation of how "simple" cells build
complex structures and solve difficult tasks, the most exciting development is
GPGPU :-)

~~~
pdm55
What about synthetic biology projects such as Yeast 2.0, which aims "to build
the world’s first synthetic eukaryotic genome"?

~~~
qiv
I thought about that too, but I do not see people flocking to synthetic
biology. I have the feeling they are further from big breakthroughs, but that
might just be my environment...

------
iLoch
I'm a web developer. We've picked up Microsoft Orleans for a large scale data
analysis platform we're building. Realizing the power of an actor model on a
mature platform like .NET has been a real treat. So many nasty problems go
away: threading, messaging queues, job queues, caching, general scaling.
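
For readers who haven't met the actor model: the core trick is that each actor owns its state and drains a mailbox one message at a time, so state access needs no locks. A minimal single-process Python analogue (Orleans "grains" layer distribution, activation, and persistence on top of this same idea):

```python
import queue
import threading

class Counter:
    """A tiny actor: one thread owns `self.value` and processes the
    mailbox serially, so no locking is needed anywhere."""

    def __init__(self):
        self.value = 0
        self.mailbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        # The single owner thread: drains messages in FIFO order.
        while True:
            msg, reply = self.mailbox.get()
            if msg == "incr":
                self.value += 1
            elif msg == "get":
                reply.put(self.value)

    def incr(self):
        self.mailbox.put(("incr", None))

    def get(self):
        reply = queue.Queue()
        self.mailbox.put(("get", reply))
        return reply.get()  # blocks until the actor replies
```

Because callers only ever enqueue messages, the threading, queueing, and caching concerns above collapse into the mailbox discipline.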

------
AshleysBrain
I think consumer software has so much to gain by moving to front-end web tech
that we bet our next product on it:
[https://www.scirra.com/blog/184/a-first-look-at-construct-3](https://www.scirra.com/blog/184/a-first-look-at-construct-3)

------
neltnerb
I can't speak for everyone in my field (chemical manufacture and catalyst
development), but I wrote about some of what I think are the current coolest
new developments in chemistry and materials science as it relates to machine
learning. [1]

In summary, machine learning can help us develop better representations of
chemical reactions and catalyst behavior, and we can now use adaptive learning
to create closed-loop systems that identify, carry out, and optimize chemical
processes to reduce environmental impact, reduce energy usage, and decrease
costs.

The state of the art isn't quite there, but I see no major conceptual barriers
left -- just a matter of implementing it.

[1] [http://www.brianneltner.com/machine-
learning/](http://www.brianneltner.com/machine-learning/)

------
themihai
WASM looks like the most exciting development for the web.

------
pyvpx
P4 language. Truly defining networks via software is, and will continue to
be, amazing.

~~~
virtuallynathan
I was going to jump on here and say that. P4 combined with P4 target ASICs
(like Barefoot Networks) will be a really awesome change to networking.

For some details, see:

[https://vimeo.com/200192012](https://vimeo.com/200192012)

or

[https://www.facebook.com/Engineering/videos/1015489039350720...](https://www.facebook.com/Engineering/videos/10154890393507200/)

------
chrisguitarguy
Advertising. Definitely first-party data for targeting. An advertiser takes
some data from its CRM, sends it to the big social sites and Google, and then
uses the list to target those folks specifically or to create look-alikes.
Actual cross-device targeting (because people are logged in), extremely
personalized and relevant.

This is coupled with a move away from cookies[0].

0\. [https://adwords.googleblog.com/2017/01/making-youtube-
better...](https://adwords.googleblog.com/2017/01/making-youtube-better-in-
mobile-cross.html)

~~~
dcw303
If the CRM has your purchase history, does this solve the problem of ads
targeting products at users who have already purchased them?

Because I am _really_ sick of Google serving me ads for stuff I recently
searched for and subsequently bought.

~~~
chrisguitarguy
That's advertisers themselves doing a bad job of retargeting. First-party data
is probably not going to change that. In fact, it may mean you'll just get
those ads across all your devices.

------
jchassoul
Finally being able to play StarCraft: Brood War again! Now as trainers of
predictive models and coaches of machines.

------
michakirschbaum
As a web developer, I'm probably most excited about Phoenix.

------
pnut
Perl6

~~~
vgy7ujm
Perl in general. Lots of innovation going on.

~~~
peteretep
Is p5-mop core yet?

------
rodolphoarruda
Predictive analysis done via Learning Management Systems (LMS) to identify
students at risk of dropping courses at universities. Student retention is a
big topic now because it directly impacts a school's revenue stream and
financial health. The big hope is for AI to be able to track how students
interact with peers, teachers, and instructional content, and then cluster
students by their dropout risk.
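
A toy version of such a risk model fits in a few lines. This is a stdlib-only logistic-regression sketch with an invented engagement feature; real LMS models would use many more signals and a proper library:

```python
import math

def train_dropout_model(rows, labels, epochs=500, lr=0.1):
    """Fit logistic regression by SGD. `rows` holds hypothetical
    per-student features, e.g. (normalized_logins_per_week,);
    `labels` are 1 if the student later dropped the course."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # predicted dropout risk
            g = p - y                       # log-loss gradient
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def risk(w, b, x):
    """Predicted dropout probability for one student."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))
```

The interesting (and hard) part in practice is the feature engineering over interaction logs, not the classifier itself.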

------
deepnotderp
Generative adversarial networks in Deep learning.

Basically, it pits two networks against each other in a "duel": a generator
network learns to make images, while a discriminator network learns to tell
the generated images from real ones.
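
A deliberately tiny sketch of that alternating update, with the "images" reduced to a single number so the whole thing is stdlib Python (real GANs use deep networks and a framework; the constants here are arbitrary):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_toy_gan(real_value=3.0, steps=2000, lr_d=0.1, lr_g=0.05, seed=0):
    """Generator: x = a + 0.1*z (only the offset `a` is trained, for
    stability). Discriminator: D(x) = sigmoid(u*x + v). Real samples
    are just the scalar `real_value`; the point is the duel, not the
    model capacity."""
    rng = random.Random(seed)
    a = 0.0          # generator parameter
    u, v = 0.0, 0.0  # discriminator parameters
    for _ in range(steps):
        z = rng.uniform(-1.0, 1.0)
        x_fake = a + 0.1 * z
        # Discriminator ascent: push D(real) toward 1, D(fake) toward 0.
        d_real = sigmoid(u * real_value + v)
        d_fake = sigmoid(u * x_fake + v)
        u += lr_d * ((1 - d_real) * real_value - d_fake * x_fake)
        v += lr_d * ((1 - d_real) - d_fake)
        u = max(-2.0, min(2.0, u))  # crude clamp to keep the duel stable
        # Generator step: non-saturating loss, maximize log D(fake).
        d_fake = sigmoid(u * x_fake + v)
        a += lr_g * (1 - d_fake) * u
    return a, u, v
```

If training goes as intended, the generator's offset drifts toward `real_value` until the discriminator can no longer tell the two distributions apart.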

------
icco
Google publishing the SRE book.

------
hacksOfSumit
autonomous cars & swarm control

~~~
nojvek
Not my field, but 3D reconstruction and vision-to-scene-graph is amazing. From
a single camera, being able to create a video-game version of a scene opens up
a ton of possibilities. I predict a video-game version of all the roads,
lakes, and buildings of our real world.

This will change real estate websites as well: I could just query for houses
with X visual features.

------
dorianm
[https://crystal-lang.org](https://crystal-lang.org)

A fast, compiled, Ruby-like programming language.

~~~
i336_
I'm curious why this has been voted down.

~~~
dorianm
Probably because it's not VC-fueled or hyper-hyped, and doesn't fit the
current mainstream pattern of "world-changing technologies", etc.

~~~
i336_
I see. I've seen a couple people recommend it now, I'll have to give it a look
at some point.

I've heard it consumes a lot of memory which may be a problem though (this
laptop only has 2GB RAM).

------
ainiriand
Another JavaScript framework. No, seriously: my field is starting to unravel
the secrets of artificial intelligence, and a lot of ethical conundrums are
going to be raised by those advancements.

------
husamia
I am a biomedical researcher working in the field of genomics. I spend the
majority of my time curating literature. I think it could be automated; we
need an AI algorithm for this area.

------
signa11
for networking, imho, it would be a combination of SDN+DPDK making it feasible
to use vanilla x86 boxes for a wide variety of tasks where you would earlier
have required 'purpose-built' silicon etc.

------
uranian
Most annoying is the need to write in ES6/Babel today, and all these JS
hipsters who really believe this is the future of web development. I totally
hate Babel, with its dozens of Webpack patches/plugins needed to make it work.
Oh, and don't forget your (Airbnb-style) linter if you want to be politically
correct.

No one needs Babel to write stellar code, IMHO. Unfortunately it is not about
the quality of the code you write; it is about being politically correct. This
whole ES6/ES7 thing is largely based on what CoffeeScript, LiveScript, etc.
already did, and did better, more than 5 years ago. And I dare to guess that
most Babel proponents don't even realise it's just a transpiler that they will
need till the end of the project's life.

note: I expect serious down votes as opposing Babel is almost a serious crime
nowadays and proves my unlimited stupidity.

No, web development is not really exciting nowadays; it is more terrifying.
Where will the hype go tomorrow? Maybe soon I will be forced to write in MS
TypeScript if I want to be taken seriously. The same goes for Redux, because
Flux is so 2014... you must be very brave not to use Redux! I can go on and
on; there are way too many examples.

Finding a web developer job now is largely about complying with made-up
standards that become more complex every day. And I've seen quite a few
horrible code bases that comply perfectly! It's a very sad reality.

~~~
pitaj
Let's establish a couple of things right at the start:

1\. You can write great applications without the latest language features

2\. The latest language features do make development easier

Babel is necessary for #2 if you don't control which browsers your users use
to access your site. If you don't want to transpile, don't. It's as simple as
that. However, the future of JS is the future of web development; that much is
indisputable. Using Babel lets you stay closer to that future and/or use these
great new language features.

You also brought up TypeScript.

3\. Types make development much easier

TypeScript is a combination of types and a transpiler, giving you the ability
to use the latest ES features. Types are great, providing:

\- Better self-documenting code

\- More safety

\- IDE interop to provide completion, as seen in VS Code

> note: I expect serious down votes as opposing Babel is almost a serious
> crime nowadays and proves my unlimited stupidity.

From the HN Guidelines: "Please don't bait other users by inviting them to
downvote you or proclaim that you expect to get downvoted."

> Maybe soon I will be forced to write in MS Typescript if I want to be taken
> seriously.

Many would say that someone should be forced to write in a typed language _in
general_ in order to be taken seriously.

~~~
uranian
Typical... I'm very glad you're not the person 'establishing things' on our
team.

> If you don't want to transpile, don't. It's as simple as that.

Are you kidding? Please tell me your estimation of how many developers write
in ES6/ES7 without using Babel or other transpiler???

You don't really need to tell me what types are about; I have a long-standing
C/C++ background. And I really don't need TypeScript. I use dynamic type
checking based on ES3, which has done the job flawlessly for years. It's very
rare for me to have a type-related bug. I'm always wary of people who preach
TypeScript; what code do they write to get into so much trouble with types?

> Many would say that someone should be forced to write in a typed language in
> general in order to be taken seriously.

omg.. 'forced', this is bad.

I'm only looking forward to WebAssembly; that will be the real game changer
and the end of JS as we know it.

~~~
pitaj
> > If you don't want to transpile, don't. It's as simple as that.

> Are you kidding? Please tell me your estimation of how many developers write
> in ES6/ES7 without using Babel or other transpiler???

I was implying that you simply don't use ES6.

------
insulanian
Rise of Elixir and Rust.

------
zump
Why hasn't there been any innovation in the area of sleep?

~~~
yellow_viper
The only thing I can think of is those sleep-timer apps. I think the main
reason they haven't taken off is that they require you to put your phone under
your sheet every night, which you can forget to do and is a bit of a pain.
They're also completely thrown off if someone else is in bed with you.

There have been a few Kickstarters which claim to reduce the amount of sleep
you need, but they've all turned out to be nonsense AFAIK.

I agree it's weird that we have nothing, given that we spend a third of our
lives asleep.

------
simooooo
.NET Core

Cross-platform, open source, very fast.

