> In Neural Signals' implantations of six people, only one had a short lived episode of focal motor seizures and brain swelling leading to temporary weakness on the contralateral side of the body.
> That person was me, Phil Kennedy. And here I am writing this addition to the Wikipedia
Which is why Wikipedia is so western-centric. I could be the world expert in my field, say... coconut picking. I haven't published any papers about picking coconuts, and neither is it a subject that the newspapers want to write about.
If I wrote a Wikipedia entry about picking coconuts, it would be immediately deleted because personal experience isn't a citable source. I'd need to tell some other fool about what I do, have him publish it in an article, and only then am I worthy of sharing my knowledge on Wikipedia.
I've seen absolute hogwash in Wikipedia articles about my field (not really picking coconuts, lol), but I can't correct it because I can't find an article where someone else has written about the topic.
The number of people who think themselves world experts in something is truly enormous. Wikipedia needs to be maintainable by people who aren't topic experts. They can't just go around anointing people that sound expert-y to them, as that would introduce far more bias, western and otherwise.
Wikipedia's job isn't to push forward the frontiers of knowledge. It's to summarize the stuff you can look up elsewhere.
> > Wikipedia's job isn't to push forward the frontiers of knowledge. It's to summarize the stuff you can look up elsewhere.
> Citation needed.
The exact policy under discussion in this subthread (and, to a lesser extent, the other two in Wikipedia's trio of core content policies) is exactly a statement of this.
Pushing forward the frontiers of knowledge is literally not the point of an encyclopedia, by definition. Even if it were, how could frontier-pushing happen with volunteer editors? With more unsourced, trust-me-bro, self-described experts editing online? I kind of doubt that.
Wikipedia prioritizes verifiability over completeness. Are there downsides to that trade-off? Absolutely. But I would rather have that than trust random people on the internet to know what they are talking about.
Note that Wikipedia has this policy focussed on being a tertiary source; Wikiversity, another Wikimedia project, welcomes original research.
A lot of what people complain about Wikipedia ruling out is in scope for either Wikiversity or Wikibooks; it's not a “we don't want that” issue but a “we have structure, and within that structure there is a better place for that”.
> Yeah, but realistically wikiversity doesn’t have much to show for it.
Because people don’t really want a public wiki to publish original research, they just want to vandalize Wikipedia because of its social impact and use their claims of personal expertise as an excuse to evade Wikipedia’s existing content policies.
No, but I definitely think people would lie about being an expert coconut picker, and I think the number of trolls on the internet who would find it funny to try to get false information into articles vastly outnumbers the number of coconut picker experts who don't have any way to verify their credentials.
Yeah, while I don't personally like the extreme over-reliance on secondary sources, to the point where it drives out actual experts, I can't imagine how bad it would be with even less strict sourcing rules. Actually, I can: that was pretty much Wikipedia until the big clean-ups at the end of the last decade, and I'm glad we are past that... interesting early stage.
In my personal experience, the demographics of America are about 50% employed in a startup, the median education level is PhD, and the most common religion is Judaism.
Personal experience is hogwash. Everyone is convinced their personal experience is an accurate view of how things really are, and everyone is wrong. I have no idea if the world's best coconut picker's first-hand experience is as laughably wrong as my first-person experience of American demographics, and I have absolutely no way to find out. Wikipedia standards exist for a reason.
Larry Sanger, who cofounded Wikipedia, left Wikipedia to found a competitor named Citizendium, where experts were supposed to play a greater role than on Wikipedia. It failed.
The Encyclopedia Britannica, which had articles exclusively written by experts, was also outcompeted by Wikipedia.
It's unlikely that Wikipedia is going to change its model to give experts a greater say than they already do, but it would be interesting to see if another expert-curated encyclopedia could eventually compete. Maybe if some incredibly well-funded company like Google or Apple got behind it it could work (though that reminds me of Microsoft's Encarta, which also failed).
Britannica is still running, I use it. It certainly doesn't have as many articles nor eyeballs as Wikipedia, but as I wrote above, it's a damn sight better. This outcome would probably have happened - as the example you've given shows - regardless of whether Britannica (and other existing encyclopaedias) had used a different pricing model, which really shows that they're not in direct competition. Same sector, different consumers.
I might be remembering wrong, but wasn't Encarta mostly offline (or at least, it required an offline installation)? I vaguely remember my family having an Encarta CD among all our CDs in the computer desk drawer; it must have been the late 90s or early 2000s, so I was still pretty young, so I might be remembering wrong. Did Encarta ever try to go fully online? I feel like that's one of the big reasons why Wikipedia was able to grow so fast; being web-based means that anyone can access it with software they already have installed and share links with their friends/family with very little effort. Maybe there was a way to share entries in Encarta, but I have trouble imagining it had as little friction as just sharing a URL.
Reminds me of whoever it was that bootstrapped a "fact" into Wikipedia "truth" by ninja-editing some obscure public figure's wiki page with a fabricated piece of trivia just as they hit the headlines for whatever reason, which made it into a published (if hastily researched) article about the figure. Then when a Wikipedia editor reverted the edit, they re-added it, citing the article as a source.
This is why I check Britannica as much as Wikipedia now, especially for any topic that might be touched by politics, such as history, or even science (or especially science, given how bad it's getting on Wikipedia). The quality difference is remarkable, and is just one more counter to my previous assumption that the internet would make everything better!
There are plenty of sites where anybody can claim they are an expert on anything and expound their expertise. Even here on HN you can get away with claiming you are the world expert in coconut picking, as long as no other coconut pickers turn up in the thread.
Wikipedia follows different standards. Not everything has to be Twitter/Facebook/Reddit/HN.
I agree it's a somewhat arbitrary line, but you can easily work around it by writing a blog post and then citing that. Slightly more work but at least you won't have to deal with the rule nazis. Now if only there was a way to work around the question closers on StackOverflow...
How do you distinguish between genuine personal experience (even assuming personal experience is reliable) and lies invented to promote an ideological narrative? Generally they can't be distinguished. Thus Wikipedia disallows both.
I'm sure they recognize that weakness but also that it's necessary. Wikipedia can't be "The source" of information, that would change its fundamental premise.
Self-published sources are generally not considered reliable: i.e., you can't just write a web page or a book and start quoting yourself on Wikipedia. That would clearly be objectionable!
Recognized domain experts who are routinely published in respectable journals are one of the main exceptions.
This is absolutely crazy cool. Perhaps modern surgical interventions will only progress one crazy risk-taker at a time. Reminds me of Werner Forssmann and catheterization! An example where overly cautious ethics killed so many people before a brave self-experimenter saved so many lives.
Certainly a unique experiment, however behind the scenes things were let down by poor experimental procedures, minimal experiment designs, outrageously inaccurate data analysis, and poor recording equipment.
Yup, it's unfortunate that he took this kind of risk without having his t's crossed and i's dotted. The experiment could have been much more informative.
My personal big disappointment was seeing objectively false claims being made at the society for neuroscience conference. It made it clear to me that the work was not being done in good faith.
I'm not sure how much he actually accomplished, though. His experiment ultimately had to end prematurely, and his technology isn't being developed further. In fact, the mainstream is going a more non-invasive route that could potentially be about as effective. I love the spirit of it, but honestly it is a crazy risk to take, and as a researcher his brain is probably more valuable undamaged than as a research subject.
To undertake such tremendous personal risk to advance the state of brain-machine interfaces for us all - this guy is a hero in my book.
Futurists and the general populace can hand-wave about humanity's glorious digital future all they want - it will not come to pass without heroes like this paving the way.
It does appear that is one of the molecules that will help stimulate production [1] but to what degree I can not tell. This might make for some fun reading on PubMed. I take half of those things they list but have no way to measure it.
I would avoid commenting on the people down-voting you, that typically just makes it worse.
No idea. I really do not have a good way to measure this. I am also quite skeptical of things like IQ tests as a means for measuring such things. I've had some in depth discussions with a friend that is a psychoanalyst about these tests and have read up on them myself. My limited understanding is that they are mostly useful for putting a person into a range but getting exact numbers is not feasible, meaning that the tests giving a specific number is really just an approximation in a range and that also varies by which of the two standard tests is used.
If I could get something that could measure exact interaction response times of synapses then perhaps that might be a starting point. The equipment exists but doubt I could even afford the grey-market gear nor do I have a place to put it.
One might ask why I take these molecules if I can not measure efficacy. For me it's simple. They are relatively cheap and there are enough studied benefits that even if the gains are marginal I will find that an acceptable net gain in the big picture of my overall health.
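On the "range, not an exact number" point about IQ tests: classical test theory makes this concrete with the standard error of measurement, SEM = SD * sqrt(1 - reliability), where the IQ scale's SD is 15 by construction. A quick sketch (the reliability figure here is an assumption for illustration; real values vary by test and age band):

```python
import math

sd = 15.0           # IQ scale standard deviation, by construction
reliability = 0.95  # assumed test-retest reliability; varies by test

# Standard error of measurement: the spread of observed scores
# you'd expect around a person's "true" score on retesting.
sem = sd * math.sqrt(1 - reliability)

# A rough 95% band around an observed score of 100.
observed = 100.0
lo = observed - 1.96 * sem
hi = observed + 1.96 * sem
print(f"SEM ~ {sem:.2f}; observed {observed:.0f} -> roughly [{lo:.1f}, {hi:.1f}]")
```

Even with a quite reliable test, the band spans several IQ points, which is why reporting a single exact number overstates the precision.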
I got some Lion's Mane capsules from Amazon (though I'm not sure how pure they are; for all I know they crushed up some whitecaps and put those in there). I started with 1/3 the recommended daily dose and started having slight headaches. That was the end of that experiment. :-(
It's BDNF. Lion's mane is just a supplement that promotes it, but there are other, more effective ones. And of course the artificial ones are stronger (Noopept, Semax, ketamine).
>Ketamine, an N-methyl-D-aspartate receptor antagonist and putative antidepressant, may increase synaptic plasticity in prefrontal cortex through higher expression of BDNF. Furthermore, ketamine was shown to change resting-state functional connectivity (RSFC) of dorsomedial prefrontal cortex (dmPFC).
That is really fascinating! I wonder whether the lesions observed in chronic, heavy ketamine abuse are related to trophic effects.
A glance at search results suggests BDNF is tightly regulated, and linked to lesions in other rare conditions.
(This makes me want to buy neuroscience books! Such an interesting field)
Maybe not in this case, however Lion's Mane is objectively helpful and has been shown in the literature to be so. Cultures have been eating it in dishes for centuries... As with all nootropics, it's usually better for those who are older.
I got some Lion's Mane capsules and tried taking them. The recommended dose is 3 capsules per day. I started with 1, but after a couple of days started having slight headaches. I didn't feel like continuing any more.
Imagine what the repercussions for the field would be if the surgery had gone a little worse, since it seems so badly planned from the outset. More restrictive regulation against the practice? All major funding either pulled outright or forever tainted with the stigma arising from this endeavor?
I don't know what it is about HN that this forum praises seemingly stupid pursuits, such as a neurosurgeon choosing to operate on himself without (a) validating the approach better, (b) having a backup plan in case things went bad (no person in the US to come and care for him, etc.), and (c) while using electronics that were antiquated in the eyes of the very experts he was entrusting with the operation.
The FDA has laid out the ethical groundwork: if a patient has no other options, a safe (sterile) surgery may be performed. It's just that they revoked his approval to use the specific type of implant. Would it really have required so much money to demonstrate sterility? Or did Kennedy just not want to deal with bureaucracy and barely functional/dying patients, and therefore took matters into his own hands? If so, I don't blame him; if you can't communicate with your subject, it's hard to make sure everything is being done right.
Apart from the clickbait headline, there is an important division between restorative and augmentative applications of BCI. The former, when the patient has already suffered serious neurological injuries (or even paralysis), presents an attractive risk/reward ratio. The latter is dubious, even if only for the operative risks, such as the swelling the article notes.
Interesting to ponder where that dividing line resides.
Augmentative BCI is literally the only route for humanity to conquer problems like death.
This guy will go down in history as one of the few who paved the way for humanity to evolve beyond the horribly fragile flesh-and-meat sacks that we are today.
Yeah, it's like there's a hook, and then a whole lot of backstory before the hook comes back.
I think authors rely on this too much as a crutch. It can work if the hook makes the reader actually wildly curious as to how that situation came about. But otherwise it comes across as like a turgid interruption.
I also dislike it in television. It's one of JJ Abrams' favorite techniques. Open with some crazy scenario, and then "two weeks previous" comes up on the title screen. He did it all the time in Alias. Although I think it worked in Mission Impossible III.
I feel like it's all about the misguided notion of writers/reporters "adding something" to what happened. They are trying to enrich the raw story with their own style, leading to this mess.
The story seems interesting enough; I'm curious what happened to him, what he discovered, etc., but I can't take this style of writing. I would like a tl;dr version of this somewhere, as I have no intention of putting my entire morning into skimming through every single insignificant detail of the developments.
Or could anyone here just post a tl;dr summary, perchance?
The headline is quite a bit of clickbait. And the frame of the article is aimed at clicks and keeping the reader hooked by creating the impression he had made some monumental mistake.
He had some brain swelling shortly after the surgery. That's it.
This is the greatest thing about Hacker News in my opinion. Obviously, there are limits on what I know so I depend on people here to help me out. Thank you! That said, I've learned that headlines are often not written by the authors of the article, but the magazine editor to drive clicks.
“Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray’s case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues.”
It's also a great danger. I didn't aim at giving a summary, I just called out the frame the author created. Something that "fao_" pointed out well when coming to the author's conclusion as well.
Have you tried reading the article? Because he literally gave himself severe brain damage / trauma from the operation:
"""
At first the procedure that Kennedy hired Cervantes to
perform—the implantation of a set of glass-and-gold-wire
electrodes beneath the surface of his own brain—seemed to go
quite well. There wasn’t much bleeding during the surgery. But
his recovery was fraught with problems. Two days in, Kennedy was
sitting on his bed when, all of a sudden, his jaw began to grind
and chatter, and one of his hands began to shake. Powton worried
that the seizure would break Kennedy’s teeth.
His language problems persisted as well. “He wasn’t making
sense anymore,” Powton says. “He kept apologizing, ‘Sorry,
sorry,’ because he couldn’t say anything else.” Kennedy
could still utter syllables and a few scattered words, but he
seemed to have lost the glue that bound them into phrases and
sentences. When Kennedy grabbed a pen and tried to write a
message, it came out as random letters scrawled on a page.
At first Powton had been impressed by what he called Kennedy’s
Indiana Jones approach to science: tromping off to Belize,
breaking the standard rules of research, gambling with his own
mind. Yet now here he was, apparently locked in. “I thought we
had damaged him for life,” Powton says. “I was like, what have
we done?”
"""
"""
Kennedy’s recovery had continued to go poorly: The more effort
he put into talking, the more he seemed to get locked up. And no
one from the US, it became clear, was coming to take the doctor
off Powton and Cervantes’ hands.
"""
From that description:
- Motor control impairment
- Extreme language impairment to the point that subject is unable to write coherently
- Literal fucking seizures
- Possible prefrontal damage
While it was just "postoperative brain swelling" and the brain "can heal", it's unlikely that he made a completely full recovery; indeed, later in the article it is alluded that he has permanent motor damage:
"""
When I meet Kennedy there one day in May 2015, [...] Kennedy says
with a slight Irish accent [...] “The retractor pulled on a
branch of the nerve that went to my temporalis muscle. I can’t
lift this eyebrow.” Indeed, I notice that the operation has left
his handsome face with an asymmetric droop.
"""
And likely prefrontal damage, given his inability to refrain from blurting out the first thing on his mind[0]:
"""
Kennedy said when we first started watching the video. But now he
deviates from our discussion about evolution to bark orders at the
screen, like a sports fan in front of a TV. “No, don’t do
that, don’t lift it up,” Kennedy says to the pair of hands
operating on his brain. “It shouldn’t go in at that angle,”
he explains to me before turning back to the computer. “Push it
in more than that!” he says. “OK, that’s plenty, that’s
plenty. Don’t push anymore!”
"""
The reporter later refers to his "garbled answer", indicating that he still has language formation problems, and that actually seems well-indicated from the snippets of quotes we see from him.
The original commenter's point still stands. The initial post-operative complications were chalked up to brain swelling in the end, and the lasting damage was fairly minor for the procedure being conducted. All mentions of fear from the operation were attributed to the surgeon performing the operation rather than Dr. Kennedy, who was undergoing the surgery.
Another thing to take note of is that Kennedy mentioned the permanent damage had occurred "when he was putting the electronics in", which implies it happened during the second operation (the first operation, with the seemingly severe symptoms, was for the electrodes).
Nowhere in the article does it indicate his mind was ever close to being lost, besides the headline. It even took note to say he stayed in a villa during recovery, which the surgeon visited daily (the implication being that a hospital would have been best for around-the-clock monitoring if his well-being were truly in danger).
Also, it's against Hacker News guidelines to ask whether someone read the article or not:
'Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that."'
>While it was just "postoperative brain swelling" and the brain "can heal".
As I said, brain swelling from the surgery. The article first gave me the impression it was because of what was implanted, i.e. his implant not working. Which wasn't the case: it worked, and he collected his dataset with his ancient hardware until it had to be removed again.
I did skim over the part of the sentence about the facial nerve, though. His wording seemed fine to me; I have met quite a few people who talk like this. People who go to Belize for brain surgery are expected to be a bit eccentric.
> Which wasnt the case, it worked and he collected his dataset with his ancient hardware till it had to be removed again.
Unfortunately as others have pointed out, his dataset is pretty worthless and the claims around it have been largely unsubstantiated. From your earlier post,
> And the frame of the article is aimed at clicks and keeping the reader hooked by creating the impression he had made some monumental mistake.
IMHO he had, and if you ask literally any neurosurgeon or anyone working in neuroscience I'm willing to bet that they would say the same.
> his dataset is pretty worthless and the claims around it have been largely unsubstantiated.
The really sad part of it is that with fairly minor changes it could have been a landmark dataset IMO. Having better recording equipment, a lower noise floor environment for data collection (e.g. don't run fluorescent lights while recording), having precise timestamps integrated into the recording, better temporal separation between recording of various phrases, adding simultaneous EMG/EEG, etc would have resolved quite a few data related issues. It's a unique situation (for good reason), but bad data...
From seeing the data, I suspect the electrode placement was in a good enough location that novel things could have been done with a cleaner data source. I guess the major limiting factor is that absolutely no one would want to be involved with the project at that stage. What sort of IRB would approve any of this? Better data would answer some key questions that could (theoretically) generalize to locked-in patients, and worst case it would provide a strong indicator that no significant BCI can be made for the task given the electrode placement.
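On the fluorescent-light point specifically: when re-recording isn't possible, a standard mitigation for mains interference is a narrow notch filter at the line frequency. A minimal sketch with SciPy, on purely synthetic data (the 12 Hz "neural" component, sampling rate, and amplitudes are all made up for illustration; this is not Kennedy's pipeline):

```python
import numpy as np
from scipy import signal

fs = 2000.0  # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)

# Toy "recording": a 12 Hz component standing in for neural signal,
# plus 60 Hz mains hum (e.g. from fluorescent lighting).
clean = np.sin(2 * np.pi * 12 * t)
noisy = clean + 0.8 * np.sin(2 * np.pi * 60 * t)

# Narrow notch at 60 Hz; Q controls how narrow the notch is.
b, a = signal.iirnotch(w0=60.0, Q=30.0, fs=fs)
# filtfilt runs the filter forward and backward: zero phase shift,
# so spike/feature timing is preserved.
filtered = signal.filtfilt(b, a, noisy)

def power_at(x, freq):
    """Power at the FFT bin closest to `freq`."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return spec[np.argmin(np.abs(freqs - freq))]

# Hum is strongly attenuated while nearby signal content survives.
print("60 Hz power ratio:", power_at(filtered, 60.0) / power_at(noisy, 60.0))
print("12 Hz power ratio:", power_at(filtered, 12.0) / power_at(noisy, 12.0))
```

In practice you would also notch the harmonics (120 Hz, 180 Hz, ...), but shielding and simply turning the lights off during recording beats filtering, since a notch throws away any neural content at that frequency too.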
Yes, I got that from your post the first time. Like I said, brain swelling from the surgery.
It read to me as if the article wanted to create a different picture. The author not mentioning up front that only the probes had been implanted left me with the impression that the speech problems were due to an error with the implant. Which they weren't.
>Unfortunately as others have pointed out, his dataset is pretty worthless and the claims around it have been largely unsubstantiated.
One commenter said the approach was worthless because better (less invasive and cheaper) approaches using eye-movement-tracking apps exist today; that was also given as the reason nobody else researched this approach. It wasn't, however, aimed at the dataset. The feedback on the dataset started at
>When Kennedy finally did present the data that he’d gathered from himself ...
Notably
>By taking on the risk himself, by working alone and out-of-pocket, Kennedy managed to create a sui generis record of language in the brain, Chang says: “It’s a very precious set of data, whether or not it will ultimately hold the secret for a speech prosthetic. It’s truly an extraordinary event.”
>IMHO he had, and if you ask literally any neurosurgeon or anyone working in neuroscience I'm willing to bet that they would say the same.
He was 66, and was first left hanging waiting for a willing subject who could still talk to validate earlier results. He then couldn't afford recertification of his invention and was faced with nobody else working on this approach.
You are skipping over the fact that it worked. His life's work and deep obsession turned out to work. He did it; what more is there to say other than good for him? Even if the dataset turns out to be without practical implications, you can see from his pondering at the very end, whether to put an implant into the other side, that he would have deeply regretted having it sit there and stare at him for the rest of his life.
> His early fears of having damaged Kennedy for life turned out to be unfounded; the language loss that left his patient briefly locked in was just a symptom of postoperative brain swelling.
While I am glad that there was minimal harm done here, I am perplexed as to why a neurosurgeon would risk his decades of training and career for such a high-risk, low-reward surgery. This could have had a terrible ending.
We should be wary about tampering with things we do not completely comprehend.
One of the two winners of the Nobel Prize in medicine from 2005 (Barry Marshall) gave himself ulcers just to prove he understood causation and the cure. He and his partner (Robin Warren) in the research won the prize for that treatment.
I was told a story about a neuroscience study that involved paralyzing the lead author with curare and manually manipulating his eyeball to study his perception as a result of the manipulation; I was told it was never repeated, although a quick Google to find a reference revealed at least one similar study (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3232712/), so I guess it's more popular than it used to be.
Another famous one was Werner Forssmann, the first doctor to perform a cardiac catheterization, who performed the operation on himself first (then walked over to the X-ray room to get a picture to prove he succeeded).
On the other hand, if someone wants to experiment on themselves, why not let them? Nobody is harmed if it doesn't work, and we can make quick breakthroughs that might be otherwise impossible due to ethics/etc.
I think it's like drugs in sports: in principle you are only risking yourself, but if it becomes normalized you will have to do it no matter how you feel about it, or risk getting left behind.
Science is (supposed to be) a collaborative effort, not a zero-sum competitive game like sports.
Of course people might get competitive about their careers and feel pressured to stay on top. But the discovery of one researcher should not be to the detriment of others.
Not quite. You’re right that scientific knowledge and career prestige are not zero-sum competitive games, but the competition for scientific funding often is. Anyone who’s sat down to write a grant proposal knows what a struggle it can be to get support for even the most promising research.
If (big if, but it is the topic of the discussion at hand) self-experimentation is increasingly normalized, and through normalization attracts funding, one can imagine the competition for said funds incentivizing researchers and labs to take extreme measures to garner attention and acquire support.
Money, the same motivation that pushes athletes to extremes, is not something scientists are immune to, even if the ultimate utility of the money serves different purposes.
> one can imagine the competition for said funds incentivizing researchers and labs to take extreme measures to garner attention and acquire support.
Yep, I think the realities of allowing such a thing are quite clear: You can't skip the rats and go straight to a human, but you could skip the rats and go straight to yourself. Having human data at an early stage would be a huge advantage for getting funding, and there would only be one way to get it.
Why not let athletes use steroids? It's unsafe. And people might consider risking their health if they feel it could make/break their career. Better to have a strong stigma against self-experimentation to avoid that.
Athletes do unsafe things all the time, with the blessing of their sport. Boxing is unsafe, American football is unsafe, skateboarding is unsafe, skiing is unsafe, extreme sports are unsafe, etc...
The prohibition on steroid use has more to do with fairness and ensuring a level playing field, so no athlete has an unfair advantage over another.
Such considerations are irrelevant when one's goal is not fair competition but advancement of science.
> The prohibition on steroid use has more to do with fairness and ensuring a level playing field, so no athlete has an unfair advantage over another.
This is somewhat circular logic - it's only 'unfair' because you've defined it that way. IMO the idea that there is a level playing field is somewhat of a myth to begin with, there are plenty of other ways to gain "fair" advantage like better training and nutrition, and access to those things is clearly not equal across all competitors. I think you could probably argue the current situation is less fair than just allowing them considering how many top athletes likely use steroids anyway and just haven't been caught.
That's not to say that I think we should allow steroids in sports, just that "a level playing field" doesn't seem like much of a justification to me. I think the simpler reason is that lots of sports already have rules to make the sport safer for their athletes, and banning steroids/drugs simply falls into that same category because it has a clear risk of spiraling out of control. Yes, sports are unsafe, but they're also generally designed to not be so unsafe that competitors are dying all the time due to going to extreme lengths to try and win.
This is quite a different situation because it creates an uneven playing field: to be competitive you will need to use steroids too. If it only affected the athletes many people wouldn't mind.
Most of the cases where self-experimentation becomes viable are cases like the one in this article, where funding doesn't exist. There is no competition because there is no funding for the topic. If he hadn't done it, nobody would have. Also, for some people the aim isn't personal profit but actual progress.
I think your take is overly zealous; if you are really worried, you could make it a requirement for grants. Which it already is.
edit: Maybe for a different framing, picture it more like climbing a mountain. You do it because it's there; it is that simple. All the rest (sponsorships and the like) is just a way to get funding to get you up there. You were already going to go; more funding just makes it safer. And as unhealthy as it is, I think society should not get involved further, so long as I'm not dragging anyone else in with me. After all, how is this any worse than eating or drinking myself to death? Or giving myself a heart attack (to come back to professional athletes)?
Not all steroids are inherently unsafe, especially the more modern ones. In reality, the primary argument against steroids is that "it's unfair to those who don't".
https://en.wikipedia.org/wiki/Neurotrophic_electrode