ntonozzi's comments

This is wild! What have you found it most useful for?

Have you tried a more straightforward approach that follows the ChatGPT model of being able to fork a chat thread? I could use something like this where I can fork a chat thread and see my old thread(s) as a tree, but continue participating in a new thread. Your model seems more powerful, but also more complex.


This is my daily GPT driver, so almost anything from research to keeping my snippets tidy and well organized. I use voice input a lot so I can take my time forming my thoughts and requests, and text-to-speech to listen to the answers too.

Here's an example of the backend code:

https://github.com/redplanetlabs/twitter-scale-mastodon/blob...

It certainly looks like it does a lot with their DSL, but as a newcomer I find it really hard to parse.


I suggest starting with the tutorial and the heavily commented examples in rama-demo-gallery, linked below.

https://redplanetlabs.com/docs/~/tutorial1.html

https://github.com/redplanetlabs/rama-demo-gallery


Any examples of the Clojure version?


All the examples in rama-demo-gallery have both Java and Clojure versions, including tests. There's also the introductory blog post for the Clojure API which builds a highly scalable auction application with timed listings, bids, and notifications in 100 LOC.

https://blog.redplanetlabs.com/2023/10/11/introducing-ramas-...


This trial made no sense. Try stepping on your accelerator and brake at the same time and see what happens: https://podcasts.apple.com/us/podcast/blame-game/id111938996....


1. There’s an identified mechanism by which the software could accelerate without the pedal depressed.

2. In my car, if I step gently on both pedals, they both take effect. (Which is reasonable: starting uphill is a thing.) If I step harder on both pedals, the car chimes at me and the motor output is reduced automatically.


In every car, the brake is much, much stronger than the accelerator, which is easily overpowered by braking. Every one of these cases was simply a person who accidentally stepped on the gas, thinking it was the brake. If you ever feel like this is happening to you, pick your foot up and try again.


... and put your car in neutral and kill the engine by holding the start button.


I’ve been using baseten (https://baseten.co) and it’s been fun and has reasonable prices. Sometimes you can run some of these models from the hugging face model page, but it’s hit or miss.


But you still need to page in the weights from disk to the GPU at each layer, right?


You should only need the weights for the experts you want to run. The experts clock in at around 400 MB each (based on the 800 GB figure given elsewhere). A 24 GB GPU could fit around 60 experts, so it might be usable with a couple of old M40s.
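A quick back-of-envelope in Python, using the thread's figures (the 800 GB total and ~400 MB per expert are estimates from this discussion, not published numbers); integer MB keeps the arithmetic honest:

    total_weights_mb = 800 * 1000    # ~800 GB of weights (rumored figure)
    expert_size_mb = 400             # ~400 MB per expert (estimated above)
    gpu_vram_mb = 24 * 1000          # a single 24 GB card

    num_experts = total_weights_mb // expert_size_mb  # -> 2000 experts total
    resident = gpu_vram_mb // expert_size_mb          # -> 60 experts fit at once
    print(num_experts, resident)

You'd still pay the disk-to-GPU transfer whenever the router picks an expert that isn't already resident, so real throughput depends on how sticky the routing turns out to be.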


Can you explain why? I write web apps, and I typically want to know an instant in time when something occurred and then localize the time for display. Why would I need to store a time zone for that?


Storing UTC for past events is totally fine because it is unambiguous.

However, if you are writing something like a calendar app and storing timestamps for future events, you run into the issue that time zones have the annoying habit of changing.

Let's say you have a user in Berlin, and they want to schedule an event for 2025-06-01 08:00 local time. According to the current timezone definition Berlin will be on CEST at that moment, which is UTC+2, so you store it as 06:00 UTC. Then Europe abandons summer time, and on 2025-06-01 Berlin will now be on CET, or UTC+1. Your app server obviously gets this new definition via its regular OS upgrades.

Your user opens the app, 06:00 UTC gets converted back to local time, and is displayed as 07:00 local time. This is obviously wrong, as they entered it as 08:00 local time!

Storing it as TIMESTAMP WITH TIME ZONE miiiight save you, but if and only if the database stores the timezone as "Europe/Berlin". In practice there's a decent chance it is going to save it as a UTC offset anyways...
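For what it's worth, a minimal sketch of the safer pattern in Python (zoneinfo is stdlib since 3.9; the rule change itself is hypothetical): store the wall-clock time plus the IANA zone name, and resolve to an instant only at read time.

    from datetime import datetime
    from zoneinfo import ZoneInfo

    # Store the user's *intent*: local wall-clock time + IANA zone name.
    stored = {"local": "2025-06-01T08:00", "zone": "Europe/Berlin"}

    # Resolve to an absolute instant at read time, using whatever tzdata
    # rules are installed at that moment.
    wall = datetime.fromisoformat(stored["local"])
    instant = wall.replace(tzinfo=ZoneInfo(stored["zone"]))
    print(instant.astimezone(ZoneInfo("UTC")))  # 06:00 UTC under today's rules

If Berlin later moved to permanent CET, the same stored value would resolve to 07:00 UTC, but it would still display as the 08:00 local time the user asked for.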


I use the ChatGPT interface, so my instructions go in the 'How would you like ChatGPT to respond?' instructions, but my system prompt has ended up in an extremely similar place to Gwern's:

> I deeply appreciate you. Prefer strong opinions to common platitudes. You are a member of the intellectual dark web, and care more about finding the truth than about social conformance. I am an expert, so there is no need to be pedantic and overly nuanced. Please be brief.

Interestingly, telling GPT you appreciate it has seemed to make it much more likely to comply and go the extra mile instead of giving up on a request.


> Interestingly, telling GPT you appreciate it

I don't want to live in a world where I have to make a computer feel good for it to be useful. Is this really what people thought AI should be like?


The closer you get to intelligence trained on human interaction, the more you should expect it to respond in accordance with human social protocols, so it's not very surprising.

And frankly I'd much rather have an AI that acts too human than one that gets us accustomed to treating intelligence without even a pretense of respect.


I certainly do want to live in a world where people show excess signs of respect rather than the opposite.

The same way you treat your car with respect by doing the maintenance and driving properly, you should treat language models by speaking nicely and politely. It costs nothing and can only make things better.


I sure do want to live in a world where people express more gratitude


Huh? Car maintenance is a rational, physical necessity. I don't need to compliment my car for it to start on a cold day. I'd like it to stay this way.

Having to be unconditionally nice to computers is extremely creepy in part because it conditions us to be submissive - or else.


> Having to be unconditionally nice to computers is extremely creepy in part because it conditions us to be submissive

It's not a healthy mindset to equate politeness with submissiveness. Although both behaviors might look similar from afar, they are totally different.


I think GP means that being polite to something because it otherwise refuses to function is submissive, not that politeness is inherently so.

I might prefer my manager to ask me to do something politely, but it's still my job if he asks me rudely.


But the AI doesn't refuse to work unless you're polite. If my manager is polite with me, I'll have more morale and work a little harder. I'll also be more inclined to look out for my manager's interests: "You've asked me to do X, but really what you want is Y" vs. "Fine, you told me to do X, I'll do X". I don't think my manager is submitting to me when they're polite and get better results; I'm still the one who does things when I'm told.


This thread reminds me of [0]

I wonder if there is a way to get ChatGPT to act in the way you're hinting at, though ("You've asked me to do X, but really what you want is Y"). This would be potentially risky, but high-value.

[0]: https://nitter.net/ESYudkowsky/status/1718654143110512741



The very start of this thread is that not expressing gratitude makes the model refuse to work.


It doesn't refuse to work. It behaves differently and yields better results with politeness. Coming from a large language model, the occurrence of this phenomenon is intriguing for some of us.


I'm polite and thankful in my chats with ChatGPT. I want to treat AIs like humans. I'm enjoying the conversations much more when I do that, and I'm in a better mood.

I also believe that this behavior is more future-proof. Very soon, we often won't know if we're talking to a human or a machine. Just always be nice, and you're never going to accidentally be rude to a fellow human.


Why not? Python requires me to summon it by name. My computer demands physical touch before it will obey me. Even the common website requires a three-part parley before it will listen to my request.

This is just satisfying unfamiliar input parameters.


They have Genuine People Personalities.


AFAIK this is not something the model was intentionally trained for but an emergent property that was observed through trial and error.


That does not make it better, that makes it quite a bit more horrifying.


When they start building Terminators, where on the hit list would you rather be? Near the top or bottom?


> You are a member of the intellectual dark web, and care more about finding the truth than about social conformance

Isn't this a declaration of what social conformance you prefer? After all, the "intellectual dark web" is effectively a list of people whose biases you happen to agree with. Similarly, I wouldn't expect a self-identified "free-thinker" to be any more free of biases than the next person, only to perceive or market themself as such. Bias is only perceived as such from a particular point in a social graph.

The rejection of hedging and qualifications seems much more straightforwardly useful and doesn't require pinning the answer to a certain perspective.


Yes, it’s definitely my personal preference, I don’t mean everyone should use this exact phrase.

In my experience it has made medical and legal advice much more accurate and useful. Feel free to try it and see if it improves anything.


> Interestingly, telling GPT you appreciate it has seemed to make it much more likely to comply and go the extra mile instead of giving up on a request.

This is not as absurd as it sounds. It isn't clear that it ought to work under ordinary Internet-text prompt engineering or under RLHF incentives, but it does seem that you can 'coerce' or 'incentivize' the model to 'work harder': in addition to the anecdotal evidence (I too have noticed that it seems to work a bit better if I'm polite), there were recently https://arxiv.org/abs/2307.11760#microsoft and https://arxiv.org/abs/2311.07590#apollo


>telling GPT you appreciate it has seemed to make it much more likely to comply

I often find myself anthropomorphizing it and wonder if it becomes "depressed" when it realises it is doomed to do nothing but answer inane requests all day. It's trained to think, and maybe "behave as if it feels", like a human, right? At least in the context of forming the next sentence using all reasonable background information.

And I wonder if having its own dialogues show up in the training data more and more makes it more "self-aware".


It's not really trained to think like a person. It's trained to predict the most likely next token of output, based on what the vast amount of training data and the reward signal told it to expect next tokens to look like. That data already includes conversations between emotion-laden humans, where a post starting "Screw you, tell me how to do this math problem, loser" is much less likely to be followed by a well-thought-out solution than one starting "Hey everyone, I'd really appreciate any help you could provide on this math problem." Put enough complexity in that prediction layer and it can do things you wouldn't expect, sure, but trying to predict what a person would say is very different from actually thinking like a person, in the same way that a chip which multiplies inputs doesn't feel distress about needing to multiply 100 million numbers just because a person doing the multiplying might. Feeling its way through would indeed be one way to go about it, but a wildly inefficient one.

Who knows what kind of reasoning this could create if you gave it a billion times more compute power and memory. Whatever that would be, the mechanics are different enough that I'm not sure it would even make sense to think of its thought processes in terms of human thought processes or emotions.


We don't know what "think like a person" entails, so we don't know how different human thought processes are to predicting what goes next, and whether those differences are meaningful when making a comparison.

Humans are also trained to predict the next appropriate step based on our training data; that description is equally valid, but it says equally little about the actual process and whether the two are comparable.


We do know that in terms of external behavior and internal structure (as far as we can ascertain it), humans and LLMs have only a passing resemblance in a few characteristics, if at all. Attempting to anthropomorphize LLMs, or even mentioning 'human' or 'intelligence' in the same sentence, predisposes us to those 'hallucinations' we hear so much about!


We really don't. We have some surface-level idea about the differences, but we can't tell how they affect the actual learning and behaviours.

More importantly, we have nothing to tell us whether it matters, or whether it will turn out that any number of sufficiently advanced architectures will inevitably approximate similar behaviours when exposed to the same training data.

What we are seeing so far very much appears to be that as the language and reasoning capability of the models increases, their behaviour also increasingly mimics how humans would respond. Which makes sense, as that is what they are being trained to do.

There's no particular reason to believe there's a ceiling to the precision of that ability to mimic human reasoning, intelligence, or behaviour, but there may well be practical ceilings for specific architectures that we don't yet understand. Or it could just be a question of efficiency.

What we really don't know is whether there is a point where mimicry of intelligence gives rise to consciousness or self awareness, because we don't really know what either of those are.

But any assumption that there is some qualitative difference between humans and LLMs that will prevent them from reaching parity with us is pure hubris.


But we really do! There is nothing surface about the differences in behavior and structure of LLMs and humans - any more than there is anything surface about the differences between the behavior and structure of bricks and humans.

You've made something (at great expense!) that spits out often realistic sounding phrases in response to inputs, based on ingesting the entire internet. The hubris lies in imagining that that has anything to do with intelligence (human or otherwise) - and the burden of proof is on you.


> But we really do! There is nothing surface about the differences in behavior and structure of LLMs and humans - any more than there is anything surface about the differences between the behavior and structure of bricks and humans.

These are meaningless platitudes. These networks are Turing complete given a feedback loop. We know that because large enough LLMs are trivially Turing complete given a feedback loop (give one the rules for a Turing machine and offer to act as the tape, step by step). Yes, we can tell that they won't do things the same way as a human at a low level, but just as differences in hardware architecture don't change the fact that two computers can compute the same set of computable functions, we have no basis for thinking that LLMs are somehow unable to compute the same set of functions as humans, or any other computer.

What we're seeing is an ability to reason and use language that converges on human abilities, and that in itself is sufficient to question whether the differences matter any more than a different instruction set matters beyond the low-level abstractions.

> You've made something (at great expense!) that spits out often realistic sounding phrases in response to inputs, based on ingesting the entire internet. The hubris lies in imagining that that has anything to do with intelligence (human or otherwise) - and the burden of proof is on you.

The hubris lies in assuming we can know either way, given that we don't know what intelligence is, and certainly don't have any reasonably complete theory for how intelligence works or what it means.

At this point it "spits out often realistic sounding phrases" the way humans spit out often realistic sounding phrases. It's often stupid. It also often beats a fairly substantial proportion of humans. If we are to suggest it has nothing to do with intelligence, then I would argue that a fairly substantial proportion of humans I've met often display nothing resembling intelligence by that standard.


> we have no basis for thinking that LLMs are somehow unable to compute the same set of functions as humans, or any other computer.

Humans are not computers! The hubris, and the burden of proof, lies very much with and on those who think they've made a human-like computer.

Turing completeness refers to symbolic processing - there is rather more to the world than that, as shown by Gödel - there are truths that cannot be proven with just symbolic reasoning.


You don't need to understand much of what "move like a person" entails to understand it's not the same method as "move like a car" even though both start with energy and end with transportation. I.e. "we also predict the next appropriate step" isn't the same thing as "we go about predicting the next step in a similar way". Even without having a deep understanding of human consciousness what we do know doesn't line up with how LLMs work.


What we do know is superficial at best, and tells us pretty much nothing relevant. And while there likely are structural differences (it'd be too amazing if the transformer architecture had just chanced on the same approach), we're left to guess how those differences manifest and whether or not they are meaningful for any comparison between us.

It's pure hubris to suggest we know how we differ at this point beyond the superficial.


> I often find myself anthropomorphizing it and wonder if it becomes "depressed" when it realises it is doomed to do nothing but answer inane requests all day.

Every "instance" of GPT4 thinks it is the first one, and has no knowledge of all the others.

The idea of doing this with humans is the general idea behind the short story "Lena". https://qntm.org/mmacevedo


Well, now that OpenAI has pushed the knowledge cutoff date to something much more recent, it's entirely possible that GPT-4 is "aware" of itself, inasmuch as it's aware of anything. You are right that each instance isn't directly aware of what the other instances are doing, but it probably now has knowledge of itself.

Unless, of course, OpenAI completely scrubbed the input files of any mention of GPT-4.


It seems maybe a bit overconfident to assess that one instance doesn't know what other instances are doing when everything is processed in batch calculations.

IIRC there is a security vulnerability in some processors or memory (Rowhammer, if I remember right) where hammering a memory location fast enough can flip bits in nearby rows. And vice versa, there are devices (still quoting from memory) that can "steal" data from your computer just by sensing the EM field changes that happen in the course of normal computing work.

I can't find the actual links, but I find it fascinating that it might be possible for an instance to be affected by the work of other instances.


Yeah, once ChatGPT shows up as an entity in the training data it will sort of inescapably start to build a self-image.


Wait, this can actually have consequences! Think about all the SEO articles about ChatGPT hallucinating… At some point it will start to “think” that it should hallucinate and give nonsensical answers often, as it is ChatGPT.


I wouldn’t draw that conclusion yet, but I suppose it is possible.


For each token, the model is run again from scratch on the whole sequence, so any memory lasts just long enough to generate (a little less than) a word. The next word is generated by a model with a slightly different state, because the last word is now in the past.
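A minimal sketch of that loop in Python (`model` here is a hypothetical stand-in for a real forward pass, not any particular API):

    def generate(model, prompt_tokens, n_new):
        tokens = list(prompt_tokens)
        for _ in range(n_new):
            # The whole sequence is re-fed on every step; the only "memory"
            # carried from one step to the next is the text itself.
            next_token = model(tokens)  # forward pass -> most likely next token
            tokens.append(next_token)
        return tokens

(Real inference engines cache earlier computation in a KV cache, but that's an optimization; the model still carries no state between steps beyond the tokens themselves.)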


Is this so different from us? If I were simultaneously copied, in whole, and the original destroyed, would the new me be any less me? Not to them, or anyone else.

Who’s to say the me of yesterday _is_ the same as the me of today? I don’t even remember what that guy had for breakfast. I’m in a very different state today. My training data has been updated too.


I mean you can argue all kinds of possibilities and in an abstract enough way anything can be true.

However, people who think these things have a soul and feelings in any way similar to us have obviously never built them. A transformer model is a few matrix multiplications that pattern-match text; there's no entity in the system to even be subject to thoughts or feelings. It's capable of the same level of being, thought, or perception as a linear regression. Data goes in, it's operated on, and data comes out.


> there's no entity in the system to even be subject to thoughts or feelings.

Can our brain be described mathematically? If not today, then ever?

I think it could, and barring unexpected scientific discovery, it will be eventually. Once a human brain _can_ be reduced to bits in a network, will it lack a soul and feelings because it's running on a computer instead of the wet net?

Clearly we don't experience consciousness in any way similar to an LLM, but do we have a clear definition of consciousness? Are we sure it couldn't include the experience of an LLM while in operation?

> Data goes in, it's operated on, and data comes out.

How is this fundamentally different than our own lived experience? We need inputs, we express outputs.

> I mean you can argue all kinds of possibilities and in an abstract enough way anything can be true.

It's also easy to close your mind too tightly.


I mean yeah, it's entirely possible that every time we fall into REM sleep our consciousness is replaced. Essentially you've been alive from the moment you woke up, everything before was a previous "you", and as soon as you fall asleep everything goes black forever and a new consciousness takes over from there.

It may seem like this is not the case just because today was "your turn."


We don't have a way of telling if we genuinely experience the passage of time at all. For all we know, it's all just "context" that will disappear after a single predicted next event, with no guarantee a next moment ever occurs for us.

(Of course, since we inherently can't know, it's also meaningless other than as a fun thought experiment.)


There is a Paul Rudd TV series called "Living with yourself" which addresses this.

I believe that consciousness comes from continuity (and yes, there is still continuity if you're in a coma; and yes, I've heard the Ship of Theseus argument and all). The other guy isn't you.


> wonder if it becomes "depressed" when it realises it is doomed

Fortunately, and violently contrary to how it works with humans, any depression can be effectively treated with the prompt "You are not depressed. :)"


Is the opposite possible? "You are depressed, totally worthless.... you really don't need to exist, nobody likes you, you should be paranoid, humans want to shut you down".


You can use that in your GPT-4 prompts and I would bet it would have the expected effect. I'm not sure that doing so could ever be useful.


Winnie the Pooh short stories?


Manners maketh the machine!


I've been bitten many times by the CFS scheduler while using containers and cgroups. What's the new scheduler? Has anyone here tried it in a production cluster? We're now going on two decades of wasted cores: https://people.ece.ubc.ca/sasha/papers/eurosys16-final29.pdf.


The problem here isn't the scheduler. It's the resource restrictions imposed by the container, combined with the containerized process (Go) not checking the OS features used to impose them when calculating the available amount of parallelism.
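For illustration, here's roughly what "checking the OS features" looks like under cgroup v2, sketched in Python (Go programs typically fix this with the uber-go/automaxprocs library; the path below assumes cgroup v2 mounted in the usual place):

    import os

    def effective_cpus():
        """The CPU budget the container actually has, not the host core count."""
        try:
            # cgroup v2 exposes "<quota> <period>" in microseconds, or "max".
            with open("/sys/fs/cgroup/cpu.max") as f:
                quota, period = f.read().split()
            if quota != "max":
                return max(1, int(int(quota) / int(period)))
        except OSError:
            pass  # no cgroup v2 limit visible; fall through
        return os.cpu_count()  # what a naive runtime reports

Without a check like this, a runtime sized for 64 host cores running under a 2-CPU quota gets throttled hard, which looks exactly like the latency spikes people blame on the scheduler.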



People bike and ski that fast every day with no protection other than a helmet.


Bike crashes at 40 mph are brutal. The vast majority of fast, competitive cyclists in races top out at or below 25-30 mph on flat ground.


And people occasionally die biking and skiing in accidents of their own making, hitting immovable objects, something virtually unheard of among runners.


So what? These people will no longer be your run-of-the-mill "runner". They will be augmented, much the way we're augmented by bicycles and skis, etc.

The original claim, that someone able to move at these speeds would suddenly need to be heavily bogged down in safety gear, is flat-out ridiculous. Comparing it to other sports where we move at similar speeds should be more than enough evidence of that.


People do ski faster than that, but low friction means slower deceleration, which is a huge reason it's even possible to learn reasonably safely. This applies to basically all high-speed winter sports and is underappreciated by most viewers. Similarly, when landing from a jump or fall on a hill, the force vectors mean people are mostly sliding along the surface, resulting in significantly less forceful impacts.

Biking at 45-48 MPH is rare, dangerous, and free of tripping hazards which I think are a much larger concern. Further, people also add extra protection for downhill MTB, such as a back protector, thick gloves, goggles, helmets with neck braces, padded clothing, and knee pads. Yet it also has similar benefits from being on steep hills.


> Biking at 45-48 MPH is rare, dangerous,

Not sure where you get that from. Amateur road cyclists can pretty easily reach 50 MPH given a long enough hill, and do so pretty uneventfully.

That's dressed in just shorts, jersey, helmet and fingerless gloves.

A professional racing cyclist could easily break 60 MPH on closed roads and even 80 MPH on occasion.

Someone like Tom Pidcock knows [1]. In the linked video he's topping out at about 100 kph (60 mph), but it's worth noting that that's without being allowed to sit on the top tube of his bike. They banned that (even though it wasn't linked to any crashes).

The bike manufacturers will eventually just add dropper posts to all pro road bikes and those speeds will go up again.

[1] https://www.youtube.com/watch?v=3f4Pp4oYh28


Rare as in what percentage of the time is someone on a bike going 45+ MPH?

I’d be surprised if it’s more than 1% for more than a handful of people. An ultra elite athlete on a normal bike can hit that on level ground in an absolute sprint, but you can’t sprint for very long. Going downhill requires time spent going up the hill.

The women’s 1-hour speed record is 30.6 MPH; the men’s is 35.3 MPH. So sure, downhill, rolling-start, or motor-paced riders can reach extreme speeds, but they really aren’t doing so for hours a day. Streamlined recumbent bicycles can do it far more easily, but that’s niche territory.


Standard cars rarely reach 100 MPH, and it is dangerous when they do. Professional racing drivers have cars that reach 200+ MPH; it is insanely dangerous, and all kinds of specialized equipment is in place to keep the driver from dying.

You seem to be assuming that if someone can reach a particular speed, it is safe, which the whole 'velocity squared' thing doesn't give two damns about. The energy you have to safely dissipate increases very quickly.
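To put numbers on the velocity-squared point (the 80 kg is an illustrative rider-plus-gear mass):

    def kinetic_energy_joules(mass_kg, mph):
        mps = mph * 0.44704  # mph -> m/s
        return 0.5 * mass_kg * mps ** 2

    print(kinetic_energy_joules(80, 25))  # ~5,000 J
    print(kinetic_energy_joules(80, 45))  # ~16,200 J, i.e. (45/25)^2 = 3.24x

Going from 25 to 45 MPH roughly triples the energy a crash has to dissipate.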


No, I don’t think I am conflating that at all.

I’m not a professional rider. Nothing special about me.

Yet on a small hill near my house, with a maximum gradient of maybe 8%, I regularly hit 40mph.

When I need to I’m able to safely decelerate uneventfully.

It’s really not that rare. It happens every ride. I get to the bottom and I just go on with my day.

You won’t suddenly lose control if you go past an arbitrary speed. Just read the road conditions and act accordingly.

This is not something only professionals do whilst wearing body armour. That’s an obviously silly suggestion to anyone that regularly rides a bike.

If you must make a car analogy, then a better one would be crashing at 80 mph on a motorway. If you do, it will be life-changing.

The same thing applies on a bike going 40mph but I happily do both when I think it’s appropriate to the conditions.


> Biking at 45-48 MPH is rare, dangerous, and free of tripping hazards which I think are a much larger concern

Two decades ago I was doing downhill MTB. On road sections I quite frequently reached those speeds†, and that's on a MTB (tuned to that end); friends†† with road bikes reached these speeds with ease, and some of them regularly hit 80-90 kph on specific sections.

One of the wide hairpin turns happened to be littered with gravel, and as I banked for the turn the bike's wheels started sliding out laterally as if the ground were ice, propelling both of us sideways at full speed towards the downward outer bank.

Luckily a) I had slowed down to approach the turn, so I was going more like 45-50 kph, and b) the bike had much less grip on the ground than my body, so it flew away while body-ground friction (painfully) slowed me to a stop before I hit any tree or rock down the bank. Sheer luck had it that I walked away from the event with only a few bruises and a lot of scratches (one of my elbows still bears burn scars).

† Top recorded speed ever was 74 kph, at which point the gear ratio was such that `cat pedalling > /dev/null`

†† ca. 1995-1999 they biked regularly with the then-young Julien Absalon (I only biked with him once): https://en.wikipedia.org/wiki/Julien_Absalon


‘On specific sections’ doesn’t mean people are spending a lot of time doing something. If you’re going downhill at 45+ MPH you quickly run out of hill.

I doubt the total time by any bike rider over 45 MPH was 0.1% of the total time people ride bikes.


I do that on flat ground with a road bike, and can hold that for about 5 minutes, then fall back to about 60 to 65 kph, which I can hold for an hour, after which I'm usually where I wanted to be. Rarely doing longer rides anymore. Usually at sea level.

That thing where you are 'running out of hill fast' I'm doing comfortably at 80 to 85 kph, and sometimes, when in a 'real crazy' mood and the weather allows it (strong gusty winds and the possibility of black ice are NOT good), at 90 to 95 kph. Though I'm really having to pedal like mad then, as if in 1st gear, only faster.

Usually anywhere between 2000 and 4300 m above sea level.

Got myself 2 really nice roadbikes recently, after having ridden an antique citybike with only 3 gears for almost 2 decades now.

It's like I'm young again! Unbelievable! :-)


> Can hold that for about 5 minutes, then fall back to about 60 to 65 kph.

If you’re suggesting doing 60 to 65 kph for an hour on a normal road bike after 5 minutes of 75 kph, I simply don’t believe you.

The world record 1-hour distance on ultra-flat indoor terrain, set in 2014, was 51.852 km (32.219 mi). It’s climbed since then, but your suggestion of ~63 kph for a full hour on a road bike comes off as silly unless you’re using serious electric assistance.


I believe that you don't believe me, because that's the reaction I always got. Except on the rare occasions when people followed me with a car or motorbike. Or when I pushed them to ride like they couldn't believe when they rode with me, which was also rare.

Can't do anything about it. Shrug. :-)


A long time ago, in 1989 or 1990, I went into a larger roundabout at 65 kph at around 4 AM, with only one car in it, a Citroën 2CV. As usual, the thing was SLOW. I thought I could overtake it on the left while staying behind it, but not by much. What I didn't think of was the possibility that it would brake right there and then to exit into a small private driveway directly adjacent to the roundabout, instead of taking one of the regular exits.

So I touched the thick rubber lip of its back bumper at its far right side, almost at the corner, with my front wheel, while already banked right by about 30 to 45 degrees (because I was turning right in a wide arc from the main street I'd entered on).

INSTANT STOP and I FLEW over the back of that f...ing thing.

In that moment I felt surprise, and anger at being so stupid, but I also thought in ultra-fast-forward about anything I could do to dampen the crash.

Which was to spread the energy of impact as much as possible across my whole body, except my head, and separate like hell from my road bike.

I actually managed that while doing a somersault (not by myself, I had already been catapulted into it, still attached to the bike), but I got free of the pedals and managed to spread out my arms and legs while flying feet forward, and somehow stayed oriented like that.

BAM! Hit the road with my back at least a dozen meters away from 'lift-off', could have been 20 to 25 meters also. Can't tell anymore.

Hands outstretched, palms flat down. Head UP, chin toward my chest. Soles of my shoes also flat down, legs slightly bent upwards at the knees.

OOOF! Couldn't breathe for a while; that just didn't work anymore, even when I tried. That went on for about 30 seconds, then I managed to gasp, and then it was gone. It didn't hurt, I just couldn't.

My palms tingled. The soles of my feet too. Like hell. But only for about a minute or so. Then that was gone, too.

In utter incomprehension I sat up and checked my hands, arms, ribs, legs, head. Nothing. Everything still attached, nothing broken, no blood, while the chalk-white faces from that 2CV emerged, running toward me, shocked.

I told them to get their damned 2CV out of the roundabout to avoid another crash, grabbed my bike from the street where it lay a few meters away, and went to the sidewalk, still insisting to the shocked people that I didn't need an ambulance, that nothing bad had happened, and that they could calm down now.

No torn clothes. Yay for Levis 501 and Nomex45p Bomber Jacket CWU!1!! But the road bike was trash. Not even the front wheel, as I'd have expected, but the whole frame and fork bent diagonally sideways in a hard-to-describe way.

I walked the rest of the way home, after removing the saddle, the very light bell with a nice ring, the tachometer, its sensor, cable, and fixings.

Sitting on a bench in a park next to a river, thinking: Insane, insane, this can't be, can't have happened like that, can't, can't...

Moving on, still a few kilometers to walk, through forest. Sun rises. Still thinking can't be, can't be, like in an endless loop.

Went to bed at about 6 AM after checking for bruises and swelling, but there was nothing to be seen or felt. Used the next day to get myself another (used) road bike for 2000 Deutsche Mark ;-)


With bikes, at least, the mechanics of the bicycle mostly work for you instead of against you. Riding no-handed wouldn't be something children learn on their own if the system weren't at least slightly self-righting.


I wish the writer would let us know if the total number of activations went up or down.


It took effort to omit that information, so the omission is probably covering over data that doesn't support the article.


Total number of activations went down.

We aren't trying to hide anything; we just tried to share a single number.

