Hacker News
NeurIPS 2023: Expo Day (mlcontests.com)
70 points by hcarlens 5 months ago | hide | past | favorite | 12 comments



I attended several talks hosted by financial institutions, because I also work in the industry. Unfortunately, I found their presentations somewhat high-level and abstract. It seemed as though the primary purpose of these talks was recruitment rather than sharing valuable expertise or experience, which is easy to expect given the industry's tendency to be secretive.

On the positive side, I also attended a presentation by MathWorks, which I found quite informative and easy to digest.


I work in FS as well, although I am on the digging-ditches side rather than research. My take is that there hasn't been much real value created by "new wave" AI so far, and what there is tends to be very dull stuff like enabling workflows. There are a lot of demos that don't really work, and there is a lot of cynicism due to

a) Crypto - everyone got bombed by fintechs promising the earth for the last 5 years, and everyone is bored and angry.

b) Watson - flat-out lying to every CxO in the industry for 10 years straight has built a certain reticence to committing the investment needed to generate the numbers for the bonuses to projects that look too good to be true.

I was going to write "I expect interesting projects to emerge in 2024", but I'm not 100% sure; there are a lot of problems at the nuts-and-bolts level of using GenAI: model cost, but also latency, reliability, manageability (prompts soon get out of hand), and evaluation. There's a lot of noise, most of it well intentioned, but naive results that cause a flap and then disintegrate under inspection are not helpful. Add in concerns about model ownership, copyright, IP disclosure (from using others' models, but also from your model spilling its guts under pressure, or from distillation), indemnity, and liability, and it might be quite a while before we realise the value from this tech in FS.

I feel that we are stumbling about in the dark.


Positron AI (inference accelerator) is a new one for me; I hadn't heard of it, and it looks like they don't have a website. Anyone know more about it?


I had a chance to chat to them again today, and wrote some more details here: https://mlcontests.com/neurips-2023/tutorials/#exhibit-hall

Also as the other comment mentioned, https://positron.ai seems to be live now.


Thanks! It looks like the ASIC inference space (if we can call it that) is getting more popular. There is also https://www.etched.ai/ which I recently saw.

I didn't follow ASIC mining during the Bitcoin bubble, but I have the impression it was the way to go for mining. I don't see why that wouldn't be true for inference, as long as one is OK with being limited in flexibility and wedded to a particular architecture.



Thanks, it was down when I tried earlier.


AutoGluon looks useful.


I wonder if it can handle 3D MRI images...probably not. :(


That's a horrible idea. Careless use of machine learning in medicine is one of the few legitimate "AI safety" cases worthy of concern. ML is far too unreliable, and doctors too illiterate, for medical imagery diagnostics to be safe without an extremely careful approach.

Unfortunately, the cat's out of the bag, and what is currently in use in hospitals around the world is far scarier than misuse of diagnostics. GE has irresponsibly deployed an ML-based reconstruction method [1] which, freed from the onerous constraint of having to be correct, makes much crisper images than classical methods. Classical compressed-sensing reconstruction methods come with a mathematical guarantee on the fidelity of the reconstruction. GE's approach does not; it's just a supervised method. Kid stuff. The FDA, for their part, is also illiterate and rubber-stamped the method after looking at their paper, where 9/10 radiologists preferred the ML-reconstructed images.

I would be remiss not to point out that Ajil Jalal et al. are doing great work on reconstruction guarantees for compressed sensing with learned priors.

[1] https://www.gehealthcare.com/products/magnetic-resonance-ima...
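To make the "mathematical guarantee" point concrete: classical compressed sensing recovers a sparse signal from underdetermined measurements by solving an l1-regularized least-squares problem, and (under conditions on the sensing matrix) the recovery error is provably bounded. Below is a minimal, self-contained sketch using ISTA (iterative soft-thresholding) on a toy random-Gaussian problem; it is illustrative only, with made-up problem sizes, and is not how actual MR reconstruction pipelines are built (those use Fourier sampling and wavelet/TV sparsity).

```python
import numpy as np

# Toy compressed sensing: recover a k-sparse signal x from m < n
# linear measurements y = A @ x by minimizing
#     0.5 * ||A x - y||^2 + lam * ||x||_1
# with ISTA (gradient step + soft-thresholding). All sizes are
# arbitrary choices for illustration.
rng = np.random.default_rng(0)
n, m, k = 200, 80, 5                       # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = A @ x_true                                  # underdetermined measurements

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L, L = Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    z = x - step * (A.T @ (A @ x - y))          # gradient step on the quadratic term
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold (l1 prox)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)  # small relative error: near-exact recovery despite m < n
```

The point of the parent comment is that this kind of recovery comes with worst-case error bounds, whereas a purely supervised reconstruction network offers no such guarantee on out-of-distribution inputs.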


> ML is far too unreliable and doctors too illiterate for medical imagery diagnostics to be safe without an extremely careful approach.

"Safe" isn't always the only objective. In some cases access to "unsafe" medical AI is better than nothing.

Your mindset is shaped by living in a wealthy country. Do you know much about the medical systems of countries with $1000 GDP per capita? Most poor people in these countries don't have access to a doctor. The wait times in the public system are so long and cumbersome (and it still costs an arm and a leg) that they don't even bother going. Then they die. It's not the country's fault; that's just the reality of poverty: there aren't enough trained doctors.

The only honest comparison, therefore, is not always "doctor vs AI"; it's "nothing vs AI".


Whoa there, cowboy. I'm not in medicine. I'm in research.



