Hacker News | seletskiy's comments

  Location: Armenia
  Remote: Yes, but flexible
  Willing to relocate: Yes, to a country with higher HDI
  Technologies: technology agnostic, everything from microelectronics to Web/GUI frontends
  Résumé/CV: on request, prefer 1:1 video chat
  Email: s.seletskiy@gmail.com
15+ YoE. Architect, lead engineer.

I would like to join biotech, med, eco, civil infra or aerospace projects that aim to tangibly reduce human and animal suffering, or to figure out the internal workings of the Universe we live in.

Excel in small, pragmatic teams with no bureaucracy. Able to deliver optimized and highly reliable software.

Not interested in blockchain or GenAI projects.


I will never understand people who say that mortality is what gives life meaning. It is exactly the opposite. If I can't observe the effects of my actions (and most likely "I" will not be able to do so after death), then it does not matter to me what I do during life, since the outcome is all the same.

There should be no death. For whatever reason, it is incredibly hard to find people who think the same, even though, paradoxically, no one wants to die.

Can we chat? My e-mail is in the profile.


Fully aligned, and would love to join forces on such a project. Let's have a chat :)


> I will never understand people who say that mortality is what gives life a meaning.

Same here. I wonder why they don't give even more meaning to their life by killing themselves right now.

Oh wait, it doesn't work that way. Death gives meaning only when it is in the distant future... or some other excuse like that. But for some people, that future is now, or very soon. They would probably also prefer to have a "meaningful" death later rather than now.


You are confusing the You-level and the mankind-level; it was never about You. The meaning is there, but it's not kind to people who think of themselves as the center of the universe and a mandatory part of it (we all are, in our own version of reality, but that's not what I mean).

A life well lived is a life that's easier to let go of, whether you believe in an afterlife or not. What that means is highly individual, but for most it's about friends, family and children, mostly children. Most people with kids have no problem seeing that meaning in mortality; plus, there are even more logical and potent arguments (resources, selfishness, not ending up with an immortal dictator forever, etc., but that's a longer discussion).


Hey. I know _exactly_ what you are talking about.

I've been thinking about that for quite a while now, and I guess I have some perspective, but I don't have exact answers, and I don't think I can fit it into an HN message, nor do I have enough motivation to do so.

_However_, I'll be more than glad to chat personally, maybe we can figure something out together. So, if you¹ want, drop me an e-mail. Address in profile.

--

¹) You, the topic starter or anyone reading this message who finds it relevant.


Hey. It's nothing very fancy. About 150 lines of C/CUDA code with no deps, including args parsing and logging.

The code runs at a steady rate of 18.00-18.40 GH/s on a cloud GPU. In fact, it's not hashes per second but messages checked per second.

It launches 64⁶ kernel threads in a loop, where each thread checks the first two bytes of the SHA of the message concatenated with a unique 6-byte per-thread nonce plus a 6-byte incremental nonce per launch. There is only one block, so the SHA algorithm is heavily trimmed. Also, most of the message is hard-coded, so a pre-calculated SHA state is used; that's 10 fewer loop iterations than encoding the whole block would take. Since we check only 2 bytes of the hash, the last two loop iterations are also unrolled by hand to exclude the parts we don't need. All code is also in big-endian, since SHA is, so the message is hard-coded in big-endian as well.
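For the curious, the pre-calculated-state trick can be sketched in plain Python (this is an illustration, not the actual CUDA kernel: the 48-byte fixed prefix, the 7-byte nonce split, and names like `PREFIX` and `digest` are assumptions made up for the sketch). Because the fixed prefix fully determines the first 12 of the 64 compression rounds, that part of the state is computed once and reused for every nonce:

```python
import hashlib
import struct

MASK = 0xFFFFFFFF

def _primes(n):
    ps, c = [], 2
    while len(ps) < n:
        if all(c % p for p in ps):
            ps.append(c)
        c += 1
    return ps

# SHA-256 constants derived from cube/square roots of the first primes,
# so nothing needs to be copied from the spec by hand.
K = [int(((p ** (1 / 3)) % 1) * 2**32) for p in _primes(64)]
H0 = [int(((p ** 0.5) % 1) * 2**32) for p in _primes(8)]

def rotr(x, n):
    return ((x >> n) | (x << (32 - n))) & MASK

def expand(w):
    # Message schedule: derive W[16..63] from the 16 block words.
    w = list(w)
    for t in range(16, 64):
        s0 = rotr(w[t - 15], 7) ^ rotr(w[t - 15], 18) ^ (w[t - 15] >> 3)
        s1 = rotr(w[t - 2], 17) ^ rotr(w[t - 2], 19) ^ (w[t - 2] >> 10)
        w.append((s1 + w[t - 7] + s0 + w[t - 16]) & MASK)
    return w

def rounds(state, w, start, stop):
    # Run compression rounds [start, stop) from an arbitrary starting state.
    a, b, c, d, e, f, g, h = state
    for t in range(start, stop):
        s1 = rotr(e, 6) ^ rotr(e, 11) ^ rotr(e, 25)
        ch = (e & f) ^ ((~e) & g)
        t1 = (h + s1 + ch + K[t] + w[t]) & MASK
        s0 = rotr(a, 2) ^ rotr(a, 13) ^ rotr(a, 22)
        maj = (a & b) ^ (a & c) ^ (b & c)
        h, g, f, e, d, c, b, a = \
            g, f, e, (d + t1) & MASK, c, b, a, (t1 + s0 + maj) & MASK
    return (a, b, c, d, e, f, g, h)

# 48-byte fixed prefix + 7-byte nonce = 55 bytes, the most that fits
# in a single SHA-256 block after padding.
PREFIX = bytes(range(48))

# Words W[0..11] come from the prefix alone, so rounds 0..11 run only once.
MIDSTATE = rounds(H0, list(struct.unpack('>12L', PREFIX)), 0, 12)

def digest(nonce7):
    blk = PREFIX + nonce7 + b'\x80' + struct.pack('>Q', 55 * 8)
    w = expand(struct.unpack('>16L', blk))  # only W[12..13] carry the nonce
    st = rounds(MIDSTATE, w, 12, 64)        # per-nonce work: rounds 12..63 only
    return struct.pack('>8L', *[(h + s) & MASK for h, s in zip(H0, st)]).hex()
```

Checking `digest(n) == hashlib.sha256(PREFIX + n).hexdigest()` confirms the shortcut matches a full hash; the real kernel additionally trims and hand-unrolls the final rounds, since only two output bytes are ever inspected.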

Base64 encoding is pretty time-consuming, so I've optimized it a bit by reordering the alphabet to be in ASCII-ascending order. I've gotten to the point where a single binary-op optimization can earn a 100-500 MH/s speed-up, and I don't really know what else remains.

I don't have an RTX 4090, so I just rented a 4090 GPU to run the code on. It's less than $0.30 per hour.

I've tried GPT-4 to get some directions for optimization, but every single proposal was useless or outright wrong.

I am by no means a C/GPU programmer, so it can probably be optimized much further by someone more knowledgeable about CUDA.

GPUs are ridiculously fast. It freaks me out that I can compute >18,000,000,000 non-trivial function calls per second.

Anyways, if you want to chat, my e-mail is in the profile.


I think we've done something similar in our kernels, because I've likewise struggled to squeeze more than ~18GH/sec from a rented 4090. I think the design of SHA256 makes it hard to avoid many of the loop iterations. It's possible there are some fun GPU specific instructions to parallelize some of the adds or memory accesses to make the kernel go faster.

If you limit your variable portion to a base16 alphabet like A-P, radix encoding would just be `nibble + 0x41`. Sure you're encoding 4x as many values, but with full branch predictability. You'd have to experimentally determine if that performs any better.

You could also do something theoretically clever with bitmasking the 4 nibbles then adding a constant like 0x4141 or 0x00410041 or something (I'm too tired to think this through) and then you can encode twice as many per op assuming 32-bit bitwise ops are similar in speed to 16-bit bitwise ops.
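The masked-add idea can be checked quickly on the CPU (a hedged Python sketch, not GPU code; the little-endian byte order and the 'A'..'P' base16-style alphabet are assumptions): pack four nibbles one-per-byte into a 32-bit word, mask, and add 0x41414141 to encode four characters with two ops:

```python
def encode4(nibbles):
    """Encode four 4-bit values as 'A'..'P' using one mask and one add."""
    packed = 0
    for i, n in enumerate(nibbles):        # in a real kernel the nonce would
        packed |= (n & 0xF) << (8 * i)     # already sit one nibble per byte
    word = (packed & 0x0F0F0F0F) + 0x41414141
    return word.to_bytes(4, 'little').decode('ascii')
```

For example, `encode4([0, 1, 2, 15])` yields `'ABCP'`: fully branch-free, unlike a 64-entry table lookup, though you'd still have to measure whether it beats the lookup on a GPU.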

Anyways this has been a cool challenge. You might also enjoy hunting for amulets - short poems whose sha256 hash contains a substring that's all 8: https://news.ycombinator.com/item?id=26960729


SHA-256 is designed such that the maximum amount of data that can be contained within a single block is 440 bits (55 bytes).

If you carefully organize the nonce at the end and use all 55 bytes, you can pre-hash the first ~20/64 rounds of state and the first several rounds of W generation and just base further iterations off of that static value (this is known as a "midstate optimization.")

> If you limit your variable portion to a base16 alphabet like A-P

The more nonce bits you decide to use, the less you can statically pre-hash.

In FPGA, I am using 64-deep, 8-bit-wide memories to do the alphabet expansion. I am guessing that in CUDA you could do something similar with `LOP3.LUT`.


Thanks, I sent you an e-mail, you might need to check in spam folder, as gmail doesn't like my mail server.


Interestingly, I have the ability to increase my resting pulse rate and then decrease it back to normal, without using any specific imagination techniques. I believe this is closely linked to the release of epinephrine into the bloodstream. To achieve this, I simply recall the sensation that arises when my body has an actual epinephrine response. This either tricks my body into responding as if there were an actual epinephrine response, or perhaps a small amount of epinephrine is actually released; I'm not entirely sure. Apart from the increased pulse, my pupils also dilate significantly for a short time, which can also be linked to an adrenaline response.

I've had the opportunity to test this in a clinical setting under an ECG. Not only did it increase my pulse, but it also caused the QRS complex to invert. Upon seeing this, the doctors advised me not to continue with this practice. However, I didn't experience any negative effects from this experiment. On the flip side, I haven't found any practical use for this ability either.

I would be intrigued to connect with someone else who has a similar capability. It seems that those internal "feelings" are as close to direct control as we can get.

As for pain, I have a similar experience as well, except instead of detaching myself from the pain, I "look" at it as closely as I can. At a certain point the pain decomposes into what it really is: electrical impulses. From that point it literally starts to feel like electricity going through your body, much the same feeling as if you accidentally grabbed both pins of an electrical plug, albeit not as intense.


>On the flip side, I haven't found any practical use for this ability either.

well, you can reliably trick a polygraph


If we're discussing controlled changes in physical condition using various equipment that allows measurement, it's called biofeedback. If done without equipment, it includes various types of autogenic training.


It's as if feelings are the driver, and the physical reality follows.


I like to think of reality as a wide highway: we have a lot of flexibility in the lane we pick in most cases, and other times we're funneled down to a single lane.

When systems are built on top of systems built on top of systems, cause and effect get lost in multiple levels of hysteresis.


Hey, I realized the same thing (that my workflows are stack-based) a while ago, but haven't gotten to the point of writing a tool yet. Dare to share?

Also, which approach do you use to efficiently store and restore relevant context information? I often find that intricate but important details are lost during a context switch.


> Hey, I realized the same thing (that my workflows are stack-based) a while ago, but haven't gotten to the point of writing a tool yet. Dare to share?

Sure, but since I'm only using it myself, you're going to have to compile it yourself if you want it (for now, anyway).

Although it was originally designed as a CLI app, I find I only use the GUI these days (also in the repo).

Start here in case this is not for you: https://github.com/lelanthran/frame/blob/master/docs/FrameIn...

> Also, which approach do you use to efficiently store and restore relevant context information? I often find that intricate but important details are lost during a context switch.

Everything is stored in a hierarchical DB, AKA the filesystem :-)

Each 'frame' is a directory with a specific set of files, and the directory name serves as the name of the frame. At any given time, a metainfo file in the root points to the currently active frame.
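The directory-per-frame scheme can be sketched in a few lines of Python (a toy illustration of the idea described above, not the actual `frame` tool; the file names `ACTIVE`, `parent`, and `notes.txt` are made up for the sketch):

```python
import pathlib
import tempfile

# Stand-in for the tool's root directory.
ROOT = pathlib.Path(tempfile.mkdtemp())

def current():
    # Metainfo file in the root names the currently active frame.
    p = ROOT / 'ACTIVE'
    if not p.exists():
        return None
    return p.read_text() or None

def push(name, notes=''):
    d = ROOT / name                      # each frame is its own directory
    d.mkdir(exist_ok=True)
    (d / 'notes.txt').write_text(notes)  # context you want to restore later
    (d / 'parent').write_text(current() or '')
    (ROOT / 'ACTIVE').write_text(name)

def pop():
    # Reactivate the parent frame recorded when this frame was pushed.
    parent = (ROOT / current() / 'parent').read_text()
    (ROOT / 'ACTIVE').write_text(parent)
    return parent or None
```

The nice property is that the "stack" survives reboots for free, and any file you drop into a frame's directory travels with that context.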


Yeah. Read the book "Exploring the World of Lucid Dreaming" by Stephen LaBerge. It proposes a number of techniques; some of them work better than others for different people. If you are not predisposed to lucid dreaming (some people are), then the main obstacle is the will, or motivation, to keep practicing.

One that worked for me is having a mechanical counter of some sort (like a lap counter, or just a miniature code lock you can use as a counter), which you reset every morning and increment during the day every time you ask yourself "Am I dreaming?" and do some basic "reality check" (e.g. trying to breathe in through your nose while pinching it, or checking a clock twice in succession and validating that it makes sense, or just looking at your hands). The goal is to consistently do a number of reality checks during your waking life, which inevitably increases the chance that you will do one in your dream. That will likely cause you to wake up immediately the first couple of times, but then you'll get used to it. It works simply because: if you don't ask yourself whether you are dreaming during your waking life, what is the chance that you'll ask yourself during sleep?

There are a number of other, more advanced techniques that border on "magic" to me, like falling asleep without losing consciousness (and then waking up the same way), but I've had that experience only a couple of times, I guess.


It’s not the getting to lucid dreaming that’s the problem, it’s the “oh! I’m dreaming!” part. Wakes me up every time.

That and I have dreams about flying, but as soon as I realize I’m having dreams about flying, I wake up and the flying dreams stop.


It is a common problem when you are starting to get lucid dreams. There are a number of techniques to combat that too, but it will generally go away after you become used to it. You just get too excited when you get a lucid dream.

It may sound absurd, but one technique that worked for me is to just "start spinning" around in your dream, so you can't focus on any specific part of the dream for long. Continuous attention to one detail in a dream somehow starts to break it apart.


> Continuous attention to one detail in a dream somehow starts to break it apart.

But that doesn't make for a very interesting experience. I actively attempted lucid dreaming about 25 years ago, and was successful after a while using the various reality-check techniques and a dream diary. And the habit has weirdly stayed with me over the years, such that I still occasionally realise I am dreaming even now. But all I can ever do is wander around for a minute or two. Over all that time I've never managed to do anything interesting like fly or conjure things or really scrutinise the dream. Those wake me up every time. I've tried the spinning technique, but that just dissolved the dream and then woke me up.


I guess it is a matter of experience and practice, as with anything else, really.

There is a very subtle mental state between "I am too aware that I am dreaming" and "there is no awareness of dreaming at all". I don't really know how to put it into words, but it seems that you need BOTH to suppress the part of the brain that wakes you up AND to avoid losing awareness, at the same time. It is very apparent during "ordinary" falling asleep, when you suddenly catch yourself "seeing pictures" (the hypnagogia phase) and become fully awake again. The trick is to continue "falling asleep" without losing awareness. The same applies during a lucid dream. You need to maintain balance.

You basically need to keep experimenting to notice those subtle changes, so you learn which mental states will wake you up and which will not, and you gain more precise control.

There are other factors at play, for sure; for example, if you manage to get your lucid dream right after the first deep phase of sleep, your body will likely not be rested enough to reach wakefulness quickly, so you'll have more time.

I don't think lucid dreaming is easy, and it was never easy for me. It was actually pretty hard work. It was almost impossible to get a lucid dream if I was already mentally exhausted during the day. As soon as I stopped practicing, the lucid dreams stopped too.


> It’s not the getting to lucid dreaming that’s the problem, it’s the “oh! I’m dreaming!” part. Wakes me up every time.

Yes, same here. I can sometimes manage to keep the dream going for what feels like a few minutes, if nothing extraordinary happens. If it is just mundane walking around and feeling the dream, it is fine. But when I start trying to make something interesting happen, I wake up.


Apparently I was even more obsessed. Wanna "trade" solutions?


Sure, shoot me an email - let's chat gpu tech! m at zorinaq.com


I don't think it's a breach of 1NF. The "chain" icon next to the column name suggests that it is not a "real" field, but rather some kind of linked field that shows entries from the "Purchases" table for each user.


That would be cool, if true. I would like to see what the underlying query looks like, and whether it is efficient.


Yes, these are essentially links to the corresponding rows in the child table. Airtable addresses this via the "linkedRecordsField", though I'm not sure it's actually "solved": the interface around them is a bit clunky, and having more than a few is pointless, as they get truncated and you can't see them anyway.

I've spent quite a bit of time plumbing Airtable's APIs, and am currently developing an app around Airtable to address some of these issues (zemanir.com) in case anyone is interested.


Yep, it's a one-to-many autogenerated backlink, helpful for seeing every e.g. purchase for a user.

The lack of backlinks is one of the big frustrations I've had in other DB apps, since "this thing has many of these" is such a common pattern.


With the new advancements in AI, it's expected that there will be more code being generated by programs rather than written by programmers. Making sure these programs deliver accurate and error-free code is a difficult task. The challenge lies in effectively proofreading AIs so that any bugs are caught before they reach end users. Good tools such as unit tests can help, but may not be enough to make sure AI-generated code is flawless. As the use of AI increases, we must develop better methods of quality assurance for these programs if they are to be truly useful.


> it's expected that there will be more code being generated by programs rather than written by programmers.

That has probably been the case since the 80s; nearly all code running on your CPU is generated by a program (a "compiler") from a textual description (the "source").


That's not what is meant. Programming languages are basically huge macro systems that expand to assembly, but the code is still written by a human.

Layers of abstraction are a problem too, but they are a different problem.


In my career I've seen resources pulled away from QA teams (sometimes disbanding them completely). It would be interesting to see things swing the other way.


This feels GPT generated.

