I was working on some legacy stuff last week that only ran in IE11, in compatibility mode. I had to use vanilla JS to get some data out of a huge form.
Yeah, document.querySelector('#app') doesn't work there. It turns out it "only works on modern browsers", which IE11 apparently doesn't qualify as, which is kind of funny to me. I ended up using document.getElementById('app') instead, which did work.
"Legacy software man, it's a helluva drug!"
Tables are great. I believe that all these grid systems will end up reinventing tables 100% anew.
I propose that they add new elements to HTML instead: <grid>, <gr>, <gd>, etc. They would work exactly like tables, but semantics-minded people wouldn't feel guilty using them.
As the maxim goes, "the best way to learn is to teach". My vision here is that for any new topic a student learns, they (or the instructor) would be able to instantiate an AI agent with the relevant preliminary knowledge for the student to practice on. The student would try to teach the agent facts and/or how to perform basic tasks, and the agent, with some basic metacognition, would be able to query the student about any unclear or conflicting points.
It definitely won't be anywhere near Turing Test level in 5 years, but I believe that by then we'll have something useful. And beyond that, I think there's real potential here, both for revolutionising education, and further down the line in terms of AGI.
This is slightly tangential, but this article from a few days ago strengthened my belief that we're getting closer -
Ones with very non-human in/capability profiles. How to handle expectation management, especially with younger students, is a challenge. Visibly-malfunctioning cartoonish Sparky, the flaky robot dog?... If it misunderstands or forgets what you said, or is emotionally blind, well that's no surprise, it's broken.
I see that as almost possible now.
I'm not asking for anything magic, just the glue that sticks all these bits together.
But in that time we will have probably discovered several currently-unappreciated, biologically relevant biochemical mechanisms which we can’t efficiently probe like this. And also it will be considered next to useless because it doesn’t work on single-cell samples. :)
Something tells me this isn't going to be the next standard in web apps.
Core idea is to start from a human-authored formal domain model (sample models: https://github.com/cjheath/activefacts-examples/blob/master/... ) and generate:
* Database schemas - warehouse, OLAP, and (planned) the ETL script between them.
* Application models (for ORM etc) - the Rails one is implemented.
* Code generation for serialization / deserialization from the shared domain model.
* (planned) automatic query extraction from type-safe templates (you write your template in terms of the domain model, and it gets automatically compiled into a set of database queries which supply the data it needs).
It's hard to learn, which is a hefty up-front price to pay, but it neatly avoids a ton of work 12+ months down the track.
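As a toy illustration of the "generate database schemas from one domain model" step (the dict-based model format and generator below are invented for this sketch, and are far cruder than the real ActiveFacts/CQL tooling):

```python
# Toy sketch: emit SQL DDL from a declarative domain model.
# The model format here is invented for illustration; real
# fact-based models carry far more semantics (constraints, roles).

MODEL = {
    "Person": {"name": "TEXT", "birth_date": "DATE"},
    "Company": {"name": "TEXT", "founded": "DATE"},
}

def to_sql(model):
    """Generate one CREATE TABLE statement per entity."""
    stmts = []
    for entity, fields in model.items():
        cols = ",\n  ".join(f"{col} {typ}" for col, typ in fields.items())
        stmts.append(
            f"CREATE TABLE {entity.lower()} (\n"
            f"  id INTEGER PRIMARY KEY,\n"
            f"  {cols}\n);"
        )
    return "\n\n".join(stmts)

print(to_sql(MODEL))
```

The same walk over MODEL could just as well emit an ORM class or (de)serialization code, which is the appeal: one source of truth, many generated artifacts.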
I'm not sure the DOM is the problem.
I suppose it's more that DOM+CSS has only recently really started to have broad support for more UI-oriented layout. Things like "make these N boxes all the same width, even if N changes" :)
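For what it's worth, that particular case is now a one-liner with flexbox; a minimal sketch (class names invented):

```css
/* Every direct child of .row gets an equal share of the width,
   however many children there are. */
.row { display: flex; }
.row > * { flex: 1; }
```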
Maybe I like text more than the average user, though.
Looking at my current list of open apps on this machine, it's a bit of a mixed bag:
- iMessage (native)
- iTerm2 (native)
- Fusion 360 (uhhh something weird, dunno)
- Spotify (Electron)
- Xcode (native)
- Emacs (native-ish, maybe GTK?)
- Thunderbird (native-ish, I think it's still laid out with XUL)
- Slack (Electron)
- Keybase (Electron)
One thing to note: most of the time I am in the city, but I have a second home in a rural area. When I'm out there on a spotty tethered connection, all of the above Electron apps have to get closed, because they lose their minds when the connection flakes out. Slack in particular likes to steal focus and pop up over everything while it loops trying to reconnect. Spotify gets into a weird state where the UI becomes unresponsive. Keybase doesn't really complain much.
(And I wish Chrome used those native Mac UI APIs as well as Mitsuharu Emacs does.)
In 10 years I think we’ll be able to generate entire movies programmatically.
Award-winning? Agreed, no. But good enough for a special effects blockbuster, or pr0n, or a stereotypical teen comedy, or a date night romcom? Probably.
Ironically, the cheapest part of a movie right now seems to be the script/storyline, and some kind of Amazon Mechanical Turk with more flexible morality might, via the magic of extensive A/B testing and population sampling, write better versions of special effects demo reels (aka action flicks) or pr0n or teen comedies than we get now.
Even if it's in drawing/hentai form and not 'real' people.
Big bucks to be made!
So it exists. Just expensive atm like lab meat.
To write a script you need full general AI smarter than humans. So not what's being talked about.
A script is tiny. You can write one in a day for a 2 hour movie. It might be crap, but if you push a button and can watch it, you can iterate quickly. This is not within 10 years though.
Phishing emails are incredibly effective. That's why they continue to exist. Just because you don't fall for them doesn't mean millions of people don't, every day. They are the largest source of stolen account credentials and credit cards.
Hard to overstate how big this will be for maintaining growth in wireless capacity. I think there's a timeline where we ditch copper coax and even buried fiber in most infrastructure.
The scary part? I work in the health care industry. This means a lot of the heavy forms processing that goes on will soon be completely automated by robots, without any human interaction or decision making. The future is cold and calculating, without any empathy or consideration for the patient; only the provider's bottom line matters.
Nearly every day I get the statistics on how many jobs our team's bots are replacing. In one instance, we had several bots that effectively replaced over 1,000 FTEs and saved the company close to $3 million. We have over 600 bots running right now, which puts us in the top 1% of all companies in the country, and they're looking to expand that number even more.
Nearly every day I feel the moral weight of what I'm doing and it gives me pause.
> What RPA software are you using?
Combination of Pega, UIPath and one other 3rd party vendor
> What tasks are being automated?
Right now, anything that can be. UHG has acquired a lot of companies over the last 8-10 years, so integrating a lot of legacy technology has been a major pain point. UHG feels like it needs to be more nimble and is going to start losing market share because of this inability to leverage more recent technology. Because of this, a lot of the work has been integrating these companies' technology: automating data entry, filling in forms, reading documents and storing them in databases, etc. We basically have a standing offer to any business unit: if they can find a way to streamline their process, we'll do it.
This doesn't add up. You're saying each full-time employee costs on average $3k/year? ($3 million / 1,000 FTEs)
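The objection is pure arithmetic on the figures quoted above:

```python
# Back-of-the-envelope check on the figures quoted above:
# ~$3 million saved by bots that replaced ~1,000 full-time employees.
savings_per_year = 3_000_000  # dollars
ftes_replaced = 1_000

cost_per_fte = savings_per_year / ftes_replaced
print(cost_per_fte)  # 3000.0 -> $3k/year per FTE, implausibly low
```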
It's also essentially lost on almost all management that coding is almost always a creative endeavor. After all, if the thing you're building already existed, you could just buy it. Creative works are extremely hard to estimate in any meaningful way.
To give an example: I can't say "it takes 1 year to build a widget, so this widget will be built in a year." It matters that Max is working on a piece of code but will be out in May, and that his work won't be sufficiently documented, so Shannon won't be able to make enough progress on this particular bug, pushing the project out two weeks. The particulars of a particular project matter all the way down.
More often than not, progress is measured in time wasted rather than value created.
Not only is it difficult, if not impossible, to estimate work items on a time scale larger than a few weeks, but doing so also gives rise to the wrong assumption that time spent equals progress made, not to mention the wrong incentives a time-based approach to work estimation brings about.
Hence, if you want to assess work in progress and risk you need to define what the terms "progress" and "risk" actually mean for your projects first.
In 15 to 20 years every smartphone (or perhaps pair of AR glasses) will have one of these.
I'm a total layperson with cameras and had an abiding sense that there is a fairly hard limitation on what's possible within a smartphone-type housing (because the sensor is limited by the amount of light). I'd love to hear more about how this perceived physical barrier is being overcome!
For example, resolutions in angle, time, color, and intensity can be traded off, inhomogeneously, and augmented with computation. It's not that simple hard limits, say Rayleigh's diffraction limit on resolution, are exactly wrong, but they do seem to get naively misapplied as system-level limitations. Super-resolution microscopy techniques, for example, work around that one.
The entire backside of your smartphone could be a sensor (think millions of pinhole cameras). That would give you plenty of light.
Correcting for your fingers covering half the lens shouldn’t be too hard, once you can build that. Keeping the thing smartphone sized, and not turning it into a heater could be challenging.
Software-only, one could grab 50 frames in half a second and do magic to integrate the results into a photo (motion estimation, if done near-perfectly, could tell you which values to average for every pixel).
Modern smartphones don’t take photos; they use image sequences to produce images that look like photos.
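A crude, stdlib-only sketch of the burst-averaging half of that idea (the hard part, per-pixel motion estimation and alignment, is assumed away: the scene here is static):

```python
# Naive burst "integration": average N noisy frames pixel by pixel.
# Real computational-photography pipelines align frames first
# (motion estimation); this sketch assumes a perfectly static scene.
import random

def average_frames(frames):
    """Average a burst of equally sized 2D frames, pixel by pixel."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / n for x in range(w)]
            for y in range(h)]

# Simulate 50 noisy captures of a flat gray (value 128) scene.
random.seed(0)
frames = [[[128 + random.gauss(0, 20) for _ in range(4)] for _ in range(4)]
          for _ in range(50)]

denoised = average_frames(frames)
# Averaging 50 frames shrinks the noise std-dev by ~sqrt(50) (~7x),
# so every pixel lands close to the true value of 128.
```

This is roughly what phone night modes do, plus the alignment step, which is why the motion-estimation caveat is the whole ballgame.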
You said possible. Not actually realized :-)
Please tell me that I am wrong. I used to be a lot more optimistic about HCIT than I seem to be right now...
These are materials with engineered structures at usually the nano or micro scale that have unique/unusual properties. Things like better antennas, imaging devices, or even materials that can perform computations.
As the manufacturing processes develop, I think we will start seeing them more widely. Defence industries in particular are interested in this at the moment, but the potential is much bigger.
In 3-5 years? Apple is rumored to be working on both a headset and glasses. So I hope for all-day AR with >1080p resolution, eye tracking, and hand tracking, that Just Works. Enabling 3D GUIs. At least shallow ones: avoiding vergence-accommodation conflict in consumer devices may take additional years.
Within five years there should be multiple AIs that specialize in different types of programming. They will have a combination of a natural language interface and interactive screens.
Most of these will be based on starting with existing template applications and tweaking them to handle special cases. They will manage that by training neural networks on datasets that provide requested tweak descriptions and the resulting code or schema changes. They will have a fallback to manually edit formulas or code when necessary. AIs will also be trained to read API descriptions and write code to access them.
Within 10 years fully general purpose AI will be available that can completely replace programmers even for difficult or novel problems.
One recent example is using PPG on smartwatches like Fitbit and Apple Watch to detect atrial fibrillation.
In 3-5 years I see some more use cases like this being released.
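Grossly simplified, the irregularity half of such detectors looks something like this (the thresholds and data below are invented for illustration, not clinically validated):

```python
# Toy irregularity check on inter-beat intervals (in seconds),
# loosely in the spirit of PPG-based AFib screening. The 0.10
# coefficient-of-variation threshold is invented for illustration.
import statistics

def irregular(intervals, cv_threshold=0.10):
    """Flag a beat series whose interval variability is high."""
    mean = statistics.mean(intervals)
    cv = statistics.stdev(intervals) / mean  # coefficient of variation
    return cv > cv_threshold

regular_beats = [0.80, 0.82, 0.79, 0.81, 0.80, 0.82]   # steady rhythm
afib_like = [0.60, 1.10, 0.75, 1.30, 0.55, 0.95]       # chaotic rhythm

print(irregular(regular_beats))  # False
print(irregular(afib_like))      # True
```

The production versions layer on signal-quality checks, longer windows, and clinical validation, but beat-to-beat irregularity is the core signal.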
In SW development, automatic code generation will break through.
Working on it now.
Going to call it _Dial-Up_.
- In 5 years you will be able to start, and make 90% of the key design decisions for, a new general or domain-specific programming language in a couple of days, a process that currently takes many months to years, if not decades
- In 10 years it will be impossible for most software engineers to keep their jobs if they refrain from using program synthesis AI's providing "super-autocomplete"
Based on experience in multiple international projects, I think that 5 years from now software will start to solve humanity's most important problems.
10 years from now it will at last play a substantial role in solving them.
These are the main problems that software will help solve, and I can observe a lot of focus on these topics in recent years, especially since a lot of the limits software used to face have disappeared.
This means software will be used less for wacky stuff like ads and more for things people care about.
1. The IRS's primary, secondary, and cold-backup tertiary mainframes will have failed, with no sufficient replacements in place.
2. Library of Congress Subject Headings will be incomprehensible due to controversies over how they are not sufficiently "woke", and library subject cataloguing will have to reach back and revitalize the work of Minnie Earl Sears to try to maintain order.
3. There will be an Internet. There will be the FAANG properties. They won't overlap anymore.