
If someone believed they would earn 2-5x more than in academia, with full freedom to work on whatever interests them, and no need to deliver value to the employer... Well, let's say "ok", we have all been young and naive, but if their advisors have not adjusted those expectations, the advisors are at fault, maybe even fraudulent.

Even in elite research groups at the most prestigious companies you are evaluated on product and company Impact, which has nothing to do with how groundbreaking your research is, how many awards it gets, or how many people cite it. I had colleagues at Google Research who were bitter that I was getting promoted (doing research addressing product needs - and later publishing it, "systems" papers that are frowned upon by "true" researchers), while with their highly cited theoretical papers they would get a "meets expectations" type of perf eval and never a promotion.


Yet your Google Research colleagues still earned way more than in academia, even without the promo.

Plus, there were quite a few places where a good publication stream did earn a promotion, without any company/business impact. FAIR, Google Brain, DM. Just not Google Research.

DeepMind didn't have any product impact for God knows how many years, but I bet they did have promos happening:)


You don't understand the Silicon Valley grind mindset :) I personally agree with you - I am happy working on interesting stuff, getting a good salary, and don't need a promo. Most times I switched jobs it meant a temporary lowering of my total comp, and often of my level. But most Googlers are obsessed with levels/promotion, talk about it constantly, and the frustration is real. They are hyper ambitious and see their level as validation.

And if you join as a fresh PhD grad (RS or SWE), the L4 salary is ok, but not amazing compared to the cost of living there. From L6 on it starts to be really, really good.


I assure you, before the LLM race, those research shops (DM, FAIR) had many directors that didn't contribute to any product whatsoever.

> I am happy working on interesting stuff, getting a good salary, and don't need a promo

People who don't contribute to the bottom line are the first to get a PIP or to be laid off. Effectively the better performers are subsidizing their salary, until the company sooner or later decides to cut dead wood.


> with full freedom to work on whatever interests them, and no need to deliver value to the employer

You know that in academia you constantly have to beg for money by trying to convince government agencies that you're bringing them value, right?


> full freedom to work on whatever interests them, and no need to deliver value to the employer...

That was an exaggeration. No employee has full freedom, and I am sure it was expected that you do something which, within some period of time, even if not immediately, has prospects for productization; or that when something becomes productizable, you would divert some of your efforts towards that.


It wasn't an exaggeration! :) The shock of many of my colleagues (often not even junior... sometimes professors who decided to join the industry) was real: "Wait, I need to talk to product teams and ask them about their needs, requirements, trade-offs, and performance budgets? I cannot just show them my 'amazing' new toy experiment I wrote a paper about, which costs 1000x their whole budget and works 50% of the time, and expect them to jump on putting it into production?" They don't want to think about products and talk to product teams (but get evaluated based on research that gets into products and makes a difference there), just do their own ivory-tower research.

One of many reasons why Google invented Transformers and many components of GPT pre-training, but ChatGPT caught them "by surprise" many years later.


Well there are a few. The Distinguished Scientists at Microsoft Research probably get to work on whatever interests them. But that is a completely different situation from a new Ph.D. joining a typical private company.

I believe the above post was highlighting that as a misconception young people may have, not saying it is the case.

Someone correct me if this is wrong, but wasn't that pretty much the premise of the Institute for Advanced Study? Minus very high-paying salaries. Just total intellectual freedom, with zero other commitments and distractions.

I know Feynman was somewhat critical of the IAS, and said that the lack of accountability and commitment could set researchers up to just follow their dreams forever, and eventually end up with a writer's block that could take years to resolve.


> Minus very high-paying salaries.

The very high salaries are central to the situation.

If you remove high salary then you have a lot more freedom. The tradeoff is the entire point of discussion.


Come be a professor in the Netherlands! You can even run a company on the side. Freedom is real. You don’t get paid well for it.

> you are evaluated on product and company Impact, which has nothing to do with how groundbreaking your research is,

I wonder... There are some academics who are really big names in their fields, who publish like crazy in some FAANG. I assume that the company benefits from just having the company's name on their papers at top conferences.


One unique and new feature of Slang that sets it apart from existing shading languages is support for differentiation and gradient computation/propagation - while still cross-compiling the generated forward and backward passes to other, platform-specific shading languages. Before, the only way to backpropagate through shader code (such as a material BRDF or lighting computation) was to either manually differentiate every function and chain them together, or rewrite it in another language or framework - such as PyTorch, or a specialized language/framework like Dr.Jit - and keep both versions in sync after any changes. Game developers typically don't use those, the programming models are different (SIMT kernels vs array computations), it's a maintenance headache, and it was a significant blocker for wider adoption of data-driven techniques and ML in existing renderers and game engines.
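To make the "rewrite it in PyTorch" alternative concrete, here is a minimal, illustrative sketch (names like lambert_brdf are mine, not from Slang or any product): a toy shading term written as tensor ops so autograd can differentiate it with respect to a material parameter - exactly the kind of duplicate tensor-code copy of a shader that differentiable Slang aims to make unnecessary.

    # Toy Lambertian shading term expressed with tensor ops so autograd can
    # backpropagate to a material parameter. Purely illustrative.
    import torch

    def lambert_brdf(albedo, normal, light_dir):
        # Diffuse term: albedo / pi * max(dot(n, l), 0)
        ndotl = torch.clamp((normal * light_dir).sum(-1, keepdim=True), min=0.0)
        return albedo / torch.pi * ndotl

    albedo = torch.tensor([0.8, 0.4, 0.2], requires_grad=True)  # "learnable" material parameter
    normal = torch.nn.functional.normalize(torch.tensor([0.0, 1.0, 0.0]), dim=0)
    light  = torch.nn.functional.normalize(torch.tensor([0.3, 1.0, 0.2]), dim=0)

    loss = lambert_brdf(albedo, normal, light).sum()
    loss.backward()
    print(albedo.grad)  # gradient of the shading result w.r.t. the albedo

With differentiable Slang the idea is that the same BRDF stays in shader code and the backward pass is generated and cross-compiled for you, so there is no second copy to keep in sync.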


It does! It supports both platform-specific compute shaders and cross-compilation to CUDA. The authors even provide some basic PyTorch bindings to help use existing shader code for gradient computation and backpropagation in ML and differentiable programming of graphics-adjacent tasks: https://github.com/shader-slang/slang-torch (Disclaimer: this is the work of my colleagues, and I helped test-drive differentiable Slang and wrote one of the example applications/use-cases.)


It's interestingly disingenuous that many claim miraculous effects of GLP-1 agonists on all kinds of health problems, where the same problems are "simply" solved by getting on a calorie deficit and lean. Liver, kidneys, heart, etc. If you have non-alcoholic fatty liver disease and are obese, getting leaner will heal it. All those impressive results are on obese or diabetic people. So it is not only not a surprise, but also dishonest marketing or ignorance.

Don't get me wrong - those are miraculous drugs. The first real non-stimulant, low-side-effect appetite suppression that will help millions. But let's wait for honest research on lean people before spreading marketing about how it improves overall health.

Also, nobody mentions the need for increasing the dosage and the tolerance build-up (just check the subreddits for how much people end up having to take after months of continuous use). You cannot be on it "for life".


The increasing dosage is to titrate up to a target dose, not because you gain tolerance. There are patients on GLP-1 for over a decade. Also, maintenance and weight loss dosages are different: see the dosing charts for Ozempic vs Wegovy, which are exactly the same drug.

Even if folks gain tolerance that doesn’t seem overly concerning. Mental health drugs also have tolerance issues and changing medicines every few years, while it has challenges for the patient, is an accepted part of long term psychiatric treatment.


Just a narrow comment, but type 2 diabetes certainly isn't limited to the obese. Many lean people develop issues with blood sugar that can't be controlled with diet alone.


A friend's son, who is an EMT, was recently diagnosed with type 2 diabetes at the age of 21. He doesn't drink or eat sweets, except on holidays, and works out five days a week. Suddenly, he started feeling sick, was vomiting, and ended up in the ER, all within three days. It can really hit you like a truck.


This is my #1 question on GLP-1: are we just seeing how humans do much, much better by being lean vs. the direct result of the drug?

A lean current-epoch human -- with our food abundance, access to modern medicines, higher standards of life, lower risks of injury, etc -- is likely going to be markedly healthier than a non-lean current-epoch human or a lean human from a prior age where medicine/food/etc was worse.

Or is it, in fact, the direct result of the drug?


> where the same problems are "simply" solved by getting on a calorie deficit and lean

Except that there apparently is mounting evidence that GLP-1 agonists also address some issues that are not generally addressed by just restricting calories. TFA touches on this briefly: "The weight loss involved with GLP-1 agonist treatment is surely a big player in many of these beneficial effects, but there seem to be some pleiotropic ones beyond what one could explain by weight loss alone."

I seem to recall seeing claims that they reduce COVID-19 mortality even controlling for BMI (possibly because they inhibit systemic inflammation), reduce alcohol consumption, and even (though I think just anecdotally) may help overcome gambling addiction.

See, for example:

https://pmc.ncbi.nlm.nih.gov/articles/PMC8425441/ [COVID-19]

https://www.ncbi.nlm.nih.gov/search/research-news/19361/ [Addiction]


I don't know that you have to be disingenuous to both be enthused about these medications AND wish we'd never created the super-processed, super-sugary, make-people-crave-them-and-overeat-them modern American diet. Once you fuck with your gut biome for long enough it's not "simple" to solve it. It's incredibly difficult both discipline and metabolism-wise.


It's not. a) Compression can be lossless. b) RAW is not about storing literal photon ADC measurements. It always has "some" processing, as those measurements always go through an ISP. We can obviously discuss which processing is the cutoff point, and it will differ for different applications, but typically this would include things like clipping, sharpening, or denoising. And even some pro DSLRs would remove row noise or artifacts in supposedly "RAW" files!

If you can change the exposure or WB - that is the minimum practical/useful definition of a RAW.


>If you can change the exposure or WB - that is the minimum practical/useful definition of a RAW.

No. No it is not at all. Are you a photographer? I am not talking about processing before the photo is saved, I am talking about the compression of the saved file.

Are you trying to tell me that these are the same?

RAW "A camera raw image file contains unprocessed or minimally processed data from the image sensor of either a digital camera, a motion picture film scanner, or other image scanner. Raw files are so named because they are not yet processed, and contain large amounts of potentially redundant data"

JPEG-XL Lossless compression uses an algorithm to shrink the image without losing any IMPORTANT data.


Lossless compression is not about importance of data. Lossless is lossless, if the result of a roundtrip is not EXACTLY IDENTICAL then it is by definition not lossless but lossy.

Maybe you're confusing it with "visually lossless" compression, which is a rather confusing euphemism for "lossy at sufficiently high quality".

JPEG XL can do both lossless and lossy. Lossless JPEG XL, like any other lossless image format, stores sample values exactly without losing anything. That is why it is called "lossless" — there is no loss whatsoever.
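A tiny illustration of what "lossless" means, using zlib from the Python standard library as a stand-in for any lossless codec (lossless JPEG XL, a camera's lossless compressed raw, ZIP) - the roundtrip gives back exactly the same bytes; only the on-disk size and the (de)compression time differ:

    # Lossless = a compress/decompress roundtrip returns data that is
    # bit-for-bit identical to the input. zlib is just a stand-in codec here.
    import os
    import zlib

    raw_sensor_data = os.urandom(1 << 20)        # 1 MiB of stand-in "RAW" bytes
    compressed = zlib.compress(raw_sensor_data)  # typically smaller (random bytes won't shrink, real sensor data does)
    restored = zlib.decompress(compressed)

    assert restored == raw_sensor_data           # exactly identical, every single byte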


Yes, I have been an (amateur) photographer for the last 27 years - film, DSLRs, mirrorless, mobile. And I worked on camera ISPs - both hardware modules, saving RAW files on mobile for the Google Pixel, and software processing of RAWs.

But I guess you know better than me. ¯\_(ツ)_/¯


I see you are impressively knowledgeable in this field. But my issue is not with your knowledge, it is with your comprehension and logic.

So, for example, Nikon typically provides three options for Raw: Compressed, Lossless Compressed and Uncompressed, as seen below:

https://photographylife.com/cdn-cgi/imagedelivery/GrQZt6ZFhE...

"Lossless Compressed means that a Raw file is compressed like an ZIP archive file without any loss of data. Once a losslessly compressed image is processed by post-processing software, the data is first decompressed, and you work with the data as if there had never been any compression at all. Lossless compression is the ideal choice, because all the data is fully preserved and yet the image takes up much less space.

Uncompressed – an uncompressed Raw file contains all the data, but without any sort of compression algorithm applied to it. Unless you do not have the Lossless Compressed option, you should always avoid selecting the Uncompressed option, as it results in huge image sizes."

Why make the distinction if there is no difference?

Apple is COMPRESSING the image. Period. RAW photos can be compressed, but if they are then they are "RAW Compressed" files, not "RAW" files. Apple is not saying you are shooting RAW Compressed, it says you are shooting ProRAW photos, which is slick marketing because everyone thinks they are shooting RAW photos, but ProRAW is not RAW. The iPhone 12 gave you a choice to shoot RAW or ProRAW, but my iPhone 13 Pro Max only allows the ProRAW option. I have no option to avoid Apple processing my photos anymore.

It is semantics but words matter. If something is off with the compression algorithm or the processing how would you know?

More, if the difference did not matter, why does Sony go out of its way to explain the difference?

https://www.sony.com/electronics/support/articles/00257081

If a computer compresses and expands the image using an algorithm you are not getting back the same image. Period. I do not care if you perceive it to be the same, it is not the same.


> Why make the distinction if there is no difference?

There is a difference, which is that the compressed lossless version is smaller and requires some amount of processing time to actually be compressed or uncompressed. But there is zero difference in the raw camera data. After decompression, it is identical.

> If a computer compresses and expands the image using an algorithm you are not getting back the same image. Period. I do not care if you perceive it to be the same, it is not the same.

It is the same. You can check each and every bit one by one, and they will all be identical.


Careful, next they’re going to argue that once you copy the raw files off the SD card, they’re not the same images anymore.


If you copy something, by definition, it is not the same file. It is a copy of the file not the original file.

If you copy a Van Gogh is it worth the same as the original?


No, but it’s also a painting instead of a digital file, so different considerations apply (maybe the copy wouldn’t be strictly identical, maybe the value is affected by “knowing that Van Gogh is the one who applied the paint to the canvas” or by the fact that only one such copy exist), and this is therefore a false analogy.

If you copy the number written on a piece of paper to another piece of paper, is it the same number? Yes, it is, and a digital photograph is defined by the numbers that make it up. Once you have two identical copies of a file, what difference does it make which one you read the numbers from?

Or are you arguing that when the camera writes those numbers to the raw file, it’s already a different image than was read from the sensor? After all, they were in volatile memory before a copy was written to the SD card.


Thank you. How can something be different and the same?


One aspect may change (file size) while another remains the same (the actual data you get when you read the file).


It's the other way around - in hearing, phase is almost irrelevant. At medium frequencies, moving your head by a few centimeters changes the phase and the phase relationships of all frequencies - and we don't perceive it at all! Most audio synthesis methods work on variants of spectrograms, and phase is approximated only later (mattering mostly for transients and rapid changes in frequency content).

In images, scrambling phase yields a completely different image. A single edge will have the same spectral content as pink/brown-ish noise, but they look completely unlike one another.
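A small numpy sketch of that image claim (mine, purely illustrative): keep an image's magnitude spectrum but randomize its phase, and an image of a single edge turns into something that looks like noise.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in "image": a single vertical edge (left half dark, right half bright).
    img = np.zeros((256, 256))
    img[:, 128:] = 1.0

    spectrum = np.fft.fft2(img)
    magnitude = np.abs(spectrum)

    # Keep the magnitude, replace the phase with random values.
    random_phase = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=spectrum.shape))
    scrambled = np.real(np.fft.ifft2(magnitude * random_phase))

    # The reconstruction bears essentially no resemblance to the original edge.
    print(np.corrcoef(img.ravel(), scrambled.ravel())[0, 1])  # close to zero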


Makes sense! My impression that phase matters in audio comes from editing audio in a DAW and the like. We are very sensitive to sudden phase changes (which would be kind of like teleporting very fast from one point to another, from our head's point of view). Our ears pick them up as sudden bursts of white noise (which also makes sense, given that they look like an impulse when zoomed in a lot).

So when generating audio I think the next chunk needs to be continuous in phase with the last chunk, whereas in images a small discontinuity in phase would just result in a noisy patch. That's also why I think it should be somewhat like video models, where sudden, small phase changes from one frame to the next give that "AI graininess" that is so common in current models.


I actually wrote down some thoughts about audio phase in a previous blog post: https://sander.ai/2020/03/24/audio-generation.html#motivatio...

I have an example audio clip in there where the phase information has been replaced with random noise, so you can perceive the effect. It certainly does matter perceptually, but it is tricky to model, and small "vocoder" models do a decent job of filling it in post-hoc.


(Not the author.) There are techniques for consistent-phase audio synthesis, like phase vocoders, but they are beyond my current knowledge.


It seems you have not worked with ML workloads, but base your comment on "internet wisdom", or worse, business analysts (I am sorry if that's inaccurate).

On GPUs, ML "just works" (inference and training) and are always order of magnitude faster than whatever CPU you have. TPUs work very well for some model architectures (old ones that they were optimized and designed for) and on some novel others can be actually slower than a CPU (because of gathers and similar) - this was my experience working on ML stuff as an ML Researcher at Google till 2022, maybe it got better but I doubt. Older TPUs were ok only for inference of those specific models and useless for training. And anything new I tried (fundamental part of research...) - the compiler would sonetimes just break with an internal error, most of the time just produce terrible and slow code, and bugs filed against it would stay open for years.

A GPU is so much more than a matrix multiplier - it's a fully general, programmable processor. With excellent compilers, but most importantly - low-level access, so you don't need to rely on proprietary compiler engineers (like the TPU ones) and anyone can develop something like Flash Attention. And as a side note: while a Transformer might be mostly matrix multiplication, many other models are not.


>On GPUs, ML "just works"

If you had worked with ML, you'd know that this is not true. It's actually more like the opposite. It also has nothing to do with the chips themselves. Things don't magically work "because GPU", they work because manufacturers spend the time getting their drivers and ecosystems right. That's why, for example, no one is using AMD GPUs for ML, despite them offering more compute per dollar on paper. Getting the software stack to the point of Nvidia/CUDA, where things really do "just work", is an enormous undertaking. And as someone who has been researching ML for more than a decade now, I can tell you Nvidia also didn't get these things right in the beginning. That's the reason why they have no real competition today (and still won't for quite some time).


> That's why, for example, no one is using AMD GPUs for ML

You're right, they are behind, but to say that nobody is using them is not truthful. AMD HPC clusters are being used [0] and [1] for AI/ML.

The larger issue is that, until recently, AMD has only been going into HPC clusters. Now, with the release of the MI300x, we have Azure and Oracle coming online with them. Disclosure: my business is also building an MI300x supercomputer, with the express goal of enabling more access for developers.

[0] https://defensescoop.com/2023/08/23/navys-new-25m-supercompu...

[1] https://arxiv.org/abs/2312.12705


>AMD HPC clusters are being used [0] and [1] for AI/ML.

Funny how you can immediately tell when the business people made these decisions and not the tech people. This is exactly what I would have expected from an organization like the Navy. On paper it does sound great and the Navy bean counters probably loved this. But they are in for a rough awakening.


As far as I can tell, the only rough awakening is that they paid $25m in 2023 for something that costs a fraction of that today, with even better performance.

In a few months, my own cluster will be nearly 2x that size, with better networking, and we aren't spending anywhere near $25m.

Disclosure: building my own supercomputer business around AMD hardware


The best I can say is that my thoughts and prayers go to the ML engineers who will actually have to deal with this. Those companies literally couldn't pay me enough to put up with it. They will likely only attract people who care about the salary and the position instead of getting things done. I've seen it with other colleagues before. These numbers of yours are completely worthless without someone who is willing to put in 5 times the work for the same or worse results.


People choose jobs and tools for a variety of reasons. I don't feel the need to cast judgement on them over it.

The numbers I gave aren't worthless, nor does it take 5x the work. I also don't think that going with a single hardware source for all of AI is very smart, especially given that there are serious supply shortages from that single vendor. No Fortune 100 would put all their eggs in one basket, and even if it were 5x the work, it would be worth it.


Probably bartwr is using "GPUs" to mean NVIDIA GPUs. Seeing as nobody uses AMD GPUs for it, that simplification seems OK.


Hey, this is a good comment. I've only toyed with ML stuff, but I've done a lot with GPUs. I hope you can find my "step back" perspective as valuable I find your up close one.

My chief mistake in the above comment was using "TPU", as that's Google's branding. I probably should've used "AI focused co-processor". I'm not talking exclusively about Google's foray into the space, especially as I haven't used TPUs.

My list of things to ditch on GPUs doesn't include cores. My point there is that there's a bunch of components that are needed for graphics programming that are entirely pointless for AI workloads, both inside the core's ALU and as larger board components. The hardware components needed for AI seem relatively well understood at this point (though that's possible to change with some other innovation).

Put another way, my point is this: Historically, the high end GPU market was mostly limited to scientific computing, enthusiast gaming, and some varied professional workloads. Nvidia has long been king here, but with relatively little attempt by others at competition. ML was added to that list in the last decade, but with some few exceptions (Google's TPU), the people who could move into the space haven't. Then chatGPT happened, investment in AI has gone crazy, and suddenly Nvidia is one of the most valuable companies in the world.

However, The list of companies who have proven they can make all the essential components (in my list in the grandparent) isn't large, but it's also not just Nvidia. Basically every computing device with a screen has some measure of GPU components, and now everyone is paying attention to AI. So I think within a few years Nvidia's market leadership will be challenged, and they certainly won't be the only supplier of top of the line AI co-processors by the end of the decade. Whether first mover advantage will keep them in first place, time will tell.


ML doesn't just work on GPUs. It's not uncommon to have architectures where GPUs don't really work, we just tend not to use those :)


Also, it's disingenuous to say "there's only 4 things you need to beat NVIDIA" when each of the 4 is an enormous undertaking.


Not to mention that every not-so-serious, inference-heavy ML developer just wants something that works so they can deliver to clients. That in itself is a semi-moat.


It's been talked to death but non-CUDA implementations have their challenges regardless of use case. That's what first-mover advantage and > 15 years of investment by Nvidia in their overall ecosystem will do for you.

But support for production serving of inference workloads outside of CUDA is universally dismal. This is where I spend most of my time and compared to CUDA anything else is non-existent or a non-starter unless you're all-in on packaged API driven Google/Amazon/etc tooling utilizing their TPUs (or whatever). The most significant vendor/cloud lock-in I think I've ever seen.

Efficient and high-scale serving of inference workloads is THE thing you need to do to serve customers and actually have a chance at ever making any money. It's shocking to me that Nvidia/CUDA has a complete stranglehold on this obvious use case.


A great summary of how unserious NVIDIA's competitors are is how long it took AMD's flagship consumer/retail GPU, the 7900 XT[X], to gain ROCm support.

That's quite literally unacceptable.


For those who don't know - one year after launch.

Meanwhile Nvidia will go as far as to back port Hopper support to CUDA 11.8 so it "just runs" the day of launch with everything you already have.


Yes, my experience with academics is that there are a lot of very dishonest people. They are political bullies who also lie in their research.

Chances of being caught are close to zero (I have contacted authors of papers whose work I was unable to replicate many times - most of the time zero reply, sometimes "yeah, it was an honest mistake, oops"), super high competition (only a few tenured positions at all the world's high-visibility institutions per year), full control over students' futures and being able to force them to commit fraud (and later blame it on them).

Obviously, not all, blah blah - but many academic scientists are the last people that should be doing science.


I know this is an unpopular opinion in the US, but the tenements can be pretty great.

I grew up in Eastern Europe (Warsaw) in "commie" blocks and there were a lot of valid criticisms and problems (like the poor quality of the buildings, small apartments, or thin walls - but consider that they rebuilt the whole of Warsaw in a decade or two after it was completely razed in WW2!), but also a lot to love. Extremely walkable, safe, all amenities (cinemas, stores, cultural centers, playgrounds) within walking distance, lots of trees and green, easy access to public transit. As a kid or teenager they were great. I preferred them 100x over the suburbs where my parents moved later, and over typical American cityscapes. (And this is why I moved to NYC and love it.)

Here is a fun and a bit provocative/exaggerated video https://youtu.be/1eIxUuuJX7Y

Everyone is different so I'm not forcing my perspective onto anyone, it's just worth considering - especially if you have not had such first-hand experience (and the main objection to tenements comes from how depressing they look, or the American association of "projects = crime", which misses a lot of the "why"). Feel free to disagree!

And apart from that, I don't think "more small houses" solves anything. It has to create more car dependence and social isolation. And it does not really scale - where would you fit more small homes in SF?


"Everyone is different"

I think this is the root of it. You prefer higher density -- and that's great. I'm sure not everyone agrees, but I don't see any reason to take that away. In fact, I think it should be encouraged for those that like it.

The issue, IMHO, is that some folks don't like that and prefer lower density. And a lot of these changes focus on taking that away from them (i.e. changing their current neighborhood).

Also, just a comment on: "It has to create more car dependence and social isolation"

I don't think that's true. I live in a pretty traditional SFH neighborhood. Within a 12 minute bike ride, I have:

* Four grocery stores (Major chains)

* 2 gyms

* Dozens of restaurants

* Several large parks

* 2 home improvement stores

* Several large employers

* Several (non-Starbucks) coffee shops

And lots more. It's certainly possible, with bikes, to have SFH neighborhoods where cars aren't required.


There are lots of less dense places in the US that are not major metros.

Many of the people who are pissed about more density coming to cities only moved to them in the last decade or two, especially on the east coast where white flight only recently reversed.


I lived in SF for ~1.5y, and it's not NYC, and I did not like it too much, but it certainly has some of the city conveniences and is not a car-hell suburb. (I lived in a building with ~10 units in Castro, which was cool)

But my question remains - how do you scale up your approach to the already-full SF? How do you make it more affordable, as prices are insane due to demand >> supply? Or do you just envision a more sprawled, but similarly dense SF as the solution?


I enjoyed the post and appreciate the author sharing their perspective. It's one of many valuable datapoints for anyone considering such a transition.

I agree with a lot, but like others - disagree with some.

My background: I've worked in tech for ~14 years in all kinds of roles - from pure junior IC, through "team lead" (something between an expert IC, a tech lead, and a manager), "tech lead", the company's "technical architect" (highest-level tech lead, peer to the technical director, but without any direct reports), and something akin to a tech director. Now I'm back to being an IC. Companies ranged from small gamedev (80 people total, 15 engineers), through medium gamedev (30 engineers and coding technical artists), and huge gamedev (Ubisoft, where you can have 100+ engineers and 1500 people total on a project), to "big tech" for the last 7y.

The idea I would like to push back on the most is that "your words have more weight". I have never had trouble getting my opinions heard, even if I didn't push them. Being the expert IC and "problem solver", I sometimes had even the CEO asking me directly for advice on how to solve issues (both technical and non-technical!). They weren't always followed, but I didn't expect that. Having an official title, in theory you can use some "authority" to formally push those ideas. But... in practice, it does not work better. People will still go directly to the most technical experts. And if you abuse the position/authority/title (I hope I never did that, but that's not for me to judge...), it can cause resentment, pushback, and more disagreements. You will also hear less gossip and honest feedback; to some engineers you become "not one of us" anymore.

It can also destroy friendships. I had a great friend (we met socially with our wives once a week and shared interests) who was my peer and was then promoted to be my lead. I still really liked them and wanted to stay friends. We always had technical disagreements (which were fine as peer ICs). Later, some of those technical disagreements and my bringing up issues publicly made them bitter toward me (as upper management saw those and took my side on some occasions), and eventually they ended the friendship completely; after I left the company, they started ghosting me. :(

Similarly, on a few occasions, I agreed to lead/manage formally (in one case, it came from me - in other cases, I was asked to). I agreed because I thought, "Things are f-d up; I can solve them by being closer to the upper leadership and helping the team succeed." Man, I was so naive. :( I didn't have any more authority or power with the higher-ups, and there were more disagreements. They expected me to enforce policies I disagreed with. As you can imagine, this didn't last long, and I always ended up leaving the team/company and being super burnt out.

So now I'm happy to be a staff-level IC, an expert, and a "hacker," playing with problems hands-on and building my expertise further. The field grows so quickly that there is always something exciting and new to learn and do. I would happily be a tech lead of some project close to me (luckily, at Google and similar, it's flexible, per-project, and not formal), but I probably do not want to manage again. Maybe it will change, depends.

