Hacker News | joelthelion's comments

Could be interesting to plug this kind of generator into American Fuzzy Lop.

So how does it work?

Basically, it creates a pattern from your letters and sequences it using WebAudio. If you look at the Network tab, you'll see the individual mp3 files for each letter being downloaded.

Surround your stuff with parens

If the Amyloid hypothesis is wrong, why does this drug[1] seem to work so well?

I think everybody should humble themselves a bit and recognize that we don't fully understand the mechanisms of Alzheimer's yet. It's a complicated disease, and a lot of the techniques we use to understand it are still in development.

[1]: http://media.biogen.com/press-release/corporate/biogen-idec-...

(Full disclosure: I have been involved in MRI and PET imaging arms of industry-funded trials such as this one).

First off, there's no question that Amyloid plays a role in the development of AD.

It is, however, becoming increasingly clear that it does not play the primary causative role. Its exact role is currently unclear: it's likely both a secondary neurotoxic agent and an epiphenomenon. It's also non-specific (i.e., amyloid deposition is seen in several other disease states as well as in many cognitively normal controls).

There are certainly other agents and mechanisms that are both more important in the development of AD and arise earlier in the time course of the disease.

This particular drug has a modest effect (from this study, it slows down disease progression, at least in the first year). The several dozen other anti-amyloid agents that have been trialled, at billions of dollars of expense, have either had little or no effect, or demonstrated a modest effect in the first 12-24 months of treatment before becoming ineffective again.

It's also worth noting that the manufacturers have a very limited understanding of the drug's mechanism(s) of action. So there's no guarantee that any effect is actually attributable to the reduced amyloid deposition rather than to some secondary process.

What do you work on? I work on image processing algorithms for structural MRI.

I'm a neuroradiologist, so I mostly interpret images (MR, CT and PET mostly).

I'm involved in various research studies, contributing to study design, sequence selection, image interpretation, clinical correlation, etc.

Most of the techie stuff gets done by physicists and some of my collaborators in other areas of neuroscience but I do enjoy doing some of my own (rather rudimentary) pre- and post-processing of MRI data.

What sort of image processing do you program for? VBM?

I work for an imaging CRO, so I primarily work on imaging endpoints for clinical trials: hippocampus volume and atrophy, etc.

I've developed multi-atlas methods for segmentation, as well as boundary shift integral (BSI) and Jacobian-based (TBM) methods for longitudinal atrophy computation.
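For readers unfamiliar with the Jacobian (TBM) approach mentioned above: the idea is to take the displacement field from a nonlinear registration between two scans and compute the determinant of its Jacobian at each voxel, where values below 1 indicate local volume loss. This is a minimal NumPy sketch of that computation, not the commenter's actual pipeline:

```python
import numpy as np

def jacobian_determinant(disp):
    """det(I + grad(u)) for a displacement field u of shape (3, X, Y, Z).

    In tensor-based morphometry (TBM), determinant values below 1
    indicate local volume loss (atrophy) between the two time points.
    """
    # grads[i, j] = d u_i / d x_j, shape (3, 3, X, Y, Z)
    grads = np.stack([np.stack(np.gradient(disp[i]), axis=0)
                      for i in range(3)])
    jac = grads + np.eye(3).reshape(3, 3, 1, 1, 1)
    # np.linalg.det expects the matrix axes to come last
    return np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))

# zero displacement: no volume change anywhere
det = jacobian_determinant(np.zeros((3, 8, 8, 8)))
print(np.allclose(det, 1.0))  # True
```

A real pipeline would get the displacement field from a registration tool and integrate the determinant over a region of interest such as the hippocampus.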

What kind of processing do you do? What tools do you use?

How sustainable is putting a large lithium battery in every home?


He's not saying DBus as a concept is bad, just that the implementation sucks. If he's right, that's something that can be fixed relatively easily (compared to a major architectural change).


No, both the concept and the implementation are bad. This thing shovels and computes a LOT of data per call; you can't optimize that out, because it's what the authors of kdbus intended. It's the typical 'look at all that CPU we have now, let's use it' mentality that keeps Wirth's law true.


He does seem to be saying that kdbus is pointless, since those fixes would be in userspace, where dbus spends most of its time. Or am I misunderstanding?


The point of kdbus is to do dbus in kernel space. The one spending lots of time in userspace overhead (not doing actually useful work) is regular dbus.

Linus is saying that kdbus is pointless because its performance gains don't come from being in-kernel, they come from the code not being a complete shit-show, and he believes the same performance should be achievable by fixing regular userspace dbus.
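The claim above is that crossing the kernel boundary is not where dbus loses its time. A toy illustration of that point (not a benchmark of dbus itself): raw kernel round-trips over a socketpair are very cheap, so per-message userspace work such as parsing, validation, and routing must account for the overhead.

```python
import socket
import time

# Measure raw kernel IPC round-trips over a local socketpair.
# Each iteration is two syscall-mediated transfers with no
# userspace protocol work at all.
a, b = socket.socketpair()
n = 10_000
start = time.perf_counter()
for _ in range(n):
    a.send(b"x")
    assert b.recv(1) == b"x"
elapsed = time.perf_counter() - start
print(f"{elapsed / n * 1e6:.2f} us per round-trip")
a.close()
b.close()
```

On typical hardware this comes out to a few microseconds per round-trip, orders of magnitude below the per-message costs usually attributed to the dbus daemon.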


Well, the code review of kdbus in the rest of that thread would argue that its code is a shit-show, it just happens to be a faster shit-show than userspace dbus.


I agree, the OP title is a bit misleading.

I use dbus every day on my laptop and it works totally fine - I don't notice its existence. I'm all for improvements, but I'm quite patient and quite happy to wait this one out.


> I use dbus every day on my laptop and it works totally fine

My laptop appears to run fine despite dbus, if the hundreds of daily 'Failed to connect to socket' messages are any indication.


I made a silly desktop app some time ago and it used DBUS to get notifications from NetworkMonitor when the system went online/offline. Nothing too fancy, very few lines of code.

When I was implementing that, I managed to get several segfaults from my Python code. All together seemed a little bit fragile to me :(

That was 5 years ago, and things are probably better now (to be fair, I don't know what was causing the crashes; NM was at 0.8 back then), but when I read Linus's comments I can't help but think he's probably right.

This is an anecdote and all, but my point is that until I had to use it... DBUS was pretty good and was working fine :)


I'm not a software engineer or gifted hacker.

Even so, I was able to use dbus to build a quick solution for a department feature in an afternoon: a presence tracker and simple announcement bot in bash, using Pidgin's libpurple dbus bindings through purple-remote.

A little digging into Pidgin's DBUS Howto gave me all the documentation I needed. It took just a lunch hour to have something functional up and running, without any real in-depth programming knowledge. In an afternoon, I'd hacked up an RSS feed that showed everyone in the department's current presence, as reported by their IM status, and that feed could then be consumed by the required apps.
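The feed-generation half of a workflow like the one described is only a few lines. This is a hypothetical sketch: in the real script the statuses came from shelling out to purple-remote, whereas here they are simply a dict, and the output is a minimal RSS 2.0 document:

```python
from xml.sax.saxutils import escape

def presence_rss(statuses):
    """Render a {user: status} dict as a minimal RSS 2.0 feed."""
    items = "\n".join(
        f"    <item><title>{escape(user)}: {escape(status)}</title></item>"
        for user, status in sorted(statuses.items())
    )
    return ('<rss version="2.0"><channel>\n'
            "    <title>Department presence</title>\n"
            f"{items}\n"
            "</channel></rss>")

feed = presence_rss({"alice": "Available", "bob": "Away"})
print(feed)
```

Any feed reader pointed at the resulting file then shows who is online, which is essentially what the comment describes.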

I don't know about dbus' other merits or flaws, but it did help me solve a specific problem quickly. This was a dead simple use case though.

At the time, I assumed that people more knowledgeable than me could do a hell of a lot more with it.


Two years ago I had to use DBus to connect to a Bluetooth device from Java.

In the end, after a few minutes of good work, the connection between the adapter and the peripheral would time out (or someone would go out of range), the whole thing would stall, and we'd never get another message from BlueZ or be able to connect to any device ever again. It was without doubt the worst development experience I ever had.

Worse, you couldn't reload the dbus library and start over because then Java would scream and crash. So we had to restart the JVM, and BlueZ, etc.

I'm sure I was doing something wrong, but if I can't get it to work properly in a matter of weeks then it's not just my fault.


Were you using threads? Python really should not crash, but combine it with library imports implemented in C and threads, and you get really subtle race conditions that result in segfaults. I have myself been forced to debug why Python segfaulted, and it was a standard library call that internally used threading, which conflicted with a C library that was not thread-safe.


Sorry, it wasn't Python. It was a system component that crashed as consequence of my Python code using DBUS.


> When I was implementing that, I managed to get several segfaults from my Python code

Well, it wasn't really DBus' fault then, was it?


I didn't mean to say my Python code was the one crashing with a segfault. Apologies if my comment wasn't clear enough.

I just did a search in my bugzilla account at Red Hat (Fedora distro) but I couldn't find a report for that specific crash. The closest I can find is a report on a crash of notification-daemon (could be related though, as it uses DBUS to advertise a service).

I'm surprised I didn't file a bug report, but there you are.


Maybe your laptop would be much faster if dbus was faster.


Is this related to the massive move to the cloud?


The point is that the library is generic - it can be used for a wide variety of difficult problems.


We don't need two different languages for this.


Oh, yes, we do. We need a language targeted at humans that machines can process, and we need a language targeted at machine processing that is inspectable by humans. Those are two conflicting targets: the latter calls for simplicity, but the former calls for shortcuts[#] to make humans' work easier, which adds, not reduces, complexity.

[#] Shortcuts like not quoting keys and omitting braces and commas in hash definitions.
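The tension is easy to demonstrate with JSON, a deliberately machine-oriented format: the very shortcuts the footnote mentions, such as unquoted keys, are exactly what a strict parser rejects.

```python
import json

# Machine-oriented: every key quoted, no trailing commas, no comments.
strict = '{"name": "demo", "port": 8080}'
assert json.loads(strict)["port"] == 8080

# Human-friendly shortcut (unquoted keys), as in the footnote above:
relaxed = '{name: "demo", port: 8080}'
try:
    json.loads(relaxed)
    print("parsed")
except json.JSONDecodeError:
    print("strict parser rejects the shortcut")
```

Supporting the relaxed form means a bigger grammar and more parser code, which is the added complexity the comment is pointing at.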


Costs are very low. If the owners don't want to maximize profit, the service can very well run like this for a very long time.


You are only considering the cost of infrastructure. What about the basic salaries of the people working on the application and monitoring the infrastructure? Who will pay for that, if not users, ads, or the government?


I use ninja every day with cmake and it's just perfect; I've never had a problem with it.


