Hacker News | SomaticPirate's comments

How was the flame graph created? (Not very familiar with C and the performance tools around it)


You can also use https://github.com/mstange/samply to make recording and viewing in the Firefox profiler easier.

It spins up a localhost server after the trace ends; the profiler reads from that local server, and nothing is shared with Firefox servers unless you explicitly choose to upload the data and create a permalink.


https://github.com/brendangregg/FlameGraph

You record performance data with `perf`, then use the scripts there to turn it into an SVG.
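The intermediate "folded" format those scripts pass around is simple enough to sketch. Here is a toy Python version of what the stackcollapse step produces (the sample stacks are made up for illustration; the real scripts parse `perf script` output):

```python
from collections import Counter

# Toy version of the stackcollapse step: fold raw stack samples into
# "frame;frame;frame count" lines, the input format flamegraph.pl expects.
# These sample stacks are invented for illustration.
samples = [
    ("main", "parse_config", "read_file"),
    ("main", "parse_config", "read_file"),
    ("main", "render"),
]

folded = Counter(";".join(stack) for stack in samples)
for stack, count in sorted(folded.items()):
    print(f"{stack} {count}")
```

Each output line is one unique call stack plus how many samples landed in it; flamegraph.pl turns those counts into box widths.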


I strongly recommend not using this. Instead use pprof - it has a MUCH better interactive flamegraph, plus other nice performance visualisations (e.g. a call graph):

https://github.com/google/pprof

    go install github.com/google/pprof@latest
    pprof -http=: prof.out
I normally collect the profiles with gperftools (https://github.com/gperftools/gperftools) and then just

    LD_PRELOAD=/usr/lib/libtcmalloc_and_profiler.so CPUPROFILE=prof.out <your command>
I've been meaning to try Samply though. Not sure if it works with pprof.


OP here. In this particular case, I used https://github.com/flamegraph-rs/flamegraph


I still think JetBrains has the gold standard in IDE-database interaction


I've been using DataGrip for a few weeks and admit it is a nice upgrade from DBVisualizer, which I'd been using for 10 years. The intellisense and features like being able to select the query in the current window are big time savers for me. I'm still on a trial and not certain I'll purchase it, just because things are moving so fast in this field. Not having it in my VS Code agent loop feels like a huge negative at this point.


Interested in knowing why you think that


Because of a rich feature set and amazing integration with DB providers.

Good starting point: https://www.jetbrains.com/pages/intellij-idea-databases/


It even lints your SQL queries written in other languages. Truly gold standard.


And autocompletes and syntax-highlights it. I couldn't imagine being without this.


DataGrip, as an extension, lets you work with SQL (highlighting, autocompletion, and more) inside non-SQL files, such as your programming-language files. I think they call this 'language injection'.


Wow, anyone able to provide an ELI5? OTel sounds amazing but this is flying over my head


Warning: this is an oversimplification.

Performance optimization and being able to "plug in" to the data ecosystem that Apache Arrow exists in.

OpenTelemetry is pretty great for a lot of uses, but the protocol over the wire is too chunky for some applications. From last year's post on the topic[0]:

> In a side-by-side comparison between OpenTelemetry Protocol (“OTLP”) and OpenTelemetry Protocol with Apache Arrow for similarly configured traces pipelines, we observe 30% improvement in compression. Although this study specifically focused on traces data, we have observed results for logs and metrics signals in production settings too, where OTel-Arrow users can expect 50% to 70% improvement relative to OTLP for similar pipeline configurations.

For your average set of apps and services running in a k8s cluster somewhere in the cloud, this is just a nice-to-have, but size on wire is a problem for a lot of systems out there today, and they are precluded from adopting OpenTelemetry until that's solved.

[0]: https://opentelemetry.io/blog/2024/otel-arrow-production/
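A rough intuition for the columnar win can be sketched in a few lines of Python. This is only an analogy, not the OTel-Arrow implementation: the fake span records and JSON/zlib stand-ins below are invented, but they show why grouping a field's values together compresses better than interleaving them record by record:

```python
import json
import zlib

# Fake span records: the same field names and similar values repeat on
# every record, which is typical of telemetry.
rows = [{"service": "checkout", "name": "db.query", "dur_us": 100 + i % 7}
        for i in range(1000)]

# Row-oriented (OTLP-like): names and values are interleaved per record.
row_compressed = zlib.compress(json.dumps(rows).encode())

# Column-oriented (Arrow-like): each field's values sit next to each other,
# and each field name appears only once.
cols = {key: [r[key] for r in rows] for key in rows[0]}
col_compressed = zlib.compress(json.dumps(cols).encode())

print(len(row_compressed), len(col_compressed))  # columnar comes out smaller
```

The 30-70% figures in the quoted post come from the real wire format, but the mechanism is the same: adjacent similar values give the compressor much more to work with.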


Not sure, but it seems like it will produce Apache Arrow data and carry it across the data stack end to end from OTel. This would be great for creating data without a bunch of duplicated/redundant processing steps, and for exporting it in a form that's ready to query.


Unless I don't understand that fully (which could be the case).

This idea could fly if downstream readers are able to read it. JSON is great because anything can read, process, transform, and serialize it without having to know the intrinsics of the protocol.

What's the point of using a binary, columnar format for data in transit?


Better compression: https://opentelemetry.io/blog/2023/otel-arrow/

You don't do high performance without knowing the data schema.


Is Arrow better than Parquet or Protobuf?


Arrow is an in-memory columnar format, kinda orthogonal to Parquet (which is an at-rest format). Protobuf is a better comparison, but it's more message-oriented and not suited for analytics.


Not having to write to disk is great, and zero-copy in memory access is instant...
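The zero-copy point can be illustrated with Python's buffer protocol, which works on the same idea Arrow builds on: a contiguous typed buffer that consumers view without copying. This is just an analogy with stdlib types, not Arrow itself:

```python
import array

# One contiguous column of 64-bit ints, like an Arrow column's data buffer.
col = array.array("q", range(1_000_000))

# Slicing a memoryview creates a new view over the SAME buffer: no copy,
# just new bounds.
window = memoryview(col)[1000:2000]
print(window[0])  # reads col[1000], i.e. 1000

# Writes through the original buffer are visible through the view,
# proving they share storage.
col[1000] = -1
print(window[0])  # now -1
```

Handing a consumer a view like this is O(1) no matter how large the column is, which is what makes "instant" in-memory access possible.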


The blog post's comparison is against OTLP, which is protobuf.



A bit hand-wavy.


Everyone in startups should be a fan of this. More competition in government space is a net benefit for everyone.

It's quite funny to see the comments complaining about "lowering the bar" when FedRAMP is a compliance regime so convoluted that most startups can't afford the entry barrier.

Now there is a chance that a smaller vendor could feasibly compete with a massive consultancy like Accenture, since the artificial barriers have been lowered.

FedRAMP compliance is also required for SaaS vendors. Datadog is famous for having it (and it took them a while).


I've tried hard to remove phthalates from my life. The biggest change that I feel is sustainable is looking for "hard" plastics. Usually phthalates are found in flexible, soft plastics. So hard plastics typically have less of them.


What do you mean? This looks like open-source


Only the app, not the server and by all indications it won't be free.


While I admire the work of hobbyists, it still looks like C/C++ will be the default until a GPU vendor makes the decision to support these libraries.

From my understanding, Vulkan and OpenGL are nice, but the true performance lies in the vendor-specific toolkits (i.e. CUDA, Metal).

Wrapping the vendor provided frameworks is liable to break and that isn't tenable for someone who wants to do this on a professional basis.


I don't quite get this comment.

This is supposed to be used in place of CUDA, HIP, Metal, Vulkan, OpenGL, etc... It's targeting the hardware directly so doesn't need to be supported as such.

The site also seems to clearly state it's a work in progress. It's just an interesting blog post...


They also miss that in CUDA's case it is an ecosystem.

Actually it is C, C++, Fortran, OpenACC and OpenMP, with PTX support for Java, Haskell, Julia, and C#, alongside the libraries, IDE tooling, and GPU graphical debugging.

Likewise Metal is plain C++14 plus extensions.

On the graphics side, HLSL dominates, followed by GLSL and now Slang. There are then MSL, PSSL, and whatever NVN uses.

By the way, at GTC NVIDIA announced going all in with Python JIT compilers for CUDA, with feature parity with existing C++ tooling. There is now a new IR for doing array programming, Tile IR.


The Zig compiler can compile C, though.


By this logic, most software engineers would also be considered high risk since they work a sedentary job and have higher risks for heart disease and obesity (which likely leads to higher healthcare costs over the long term)


I wonder how those compare to something like this: https://developers.google.com/optimization/service/schedulin...


Wow, the smugness of that reply: responding by calling someone naive and blowing them off despite there being real questions.

The "insecure crypto" that they clearly link to (despite not wanting to put them on blast) was also a bit overdone. I guess we are all stuck hiring this expert to review our crypto code (under NDA, of course) and tell us we really should use AWS KMS.


AWS KMS is great product branding. I've never seen another company so accurately capture how it feels to use their product with just the name before.


It's also just a profoundly good product. If you can use KMS, you should.


Always be suspicious of any acronym with a ‘K’ in it, just on general principle.


There are some weird attacks against KMS that I think are possible and not obvious. For example, KMS has a mode where it will decrypt without supplying a key reference (suspicious!). If an attacker can control the ciphertext, then they can share a KMS key from their AWS account to yours and then control the plaintext. I haven't confirmed this works, so maybe my understanding is incorrect.

Also, with KMS you probably should be using the data-key API, but then you need some kind of authenticated encryption implemented locally. I think AWS has SDKs for this, but if you are not covered by the SDK then you are back to rolling your own crypto.
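The core of "authenticated encryption implemented locally" is an integrity tag verified in constant time before the plaintext is trusted. Here is a minimal encrypt-then-MAC sketch of just the tagging and verification step, using Python's stdlib; the encryption itself is out of scope, and in practice you should use an AEAD like AES-GCM (e.g. via the AWS Encryption SDK) rather than hand-rolling this:

```python
import hashlib
import hmac

TAG_LEN = 32  # SHA-256 output size

def seal(mac_key: bytes, ciphertext: bytes) -> bytes:
    # Encrypt-then-MAC: append a tag computed over the ciphertext.
    tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def open_sealed(mac_key: bytes, blob: bytes) -> bytes:
    ciphertext, tag = blob[:-TAG_LEN], blob[-TAG_LEN:]
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    # compare_digest avoids leaking where the tags differ via timing.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return ciphertext
```

A tampered blob then fails closed instead of silently decrypting to garbage, which is exactly the property the raw data-key API does not give you on its own.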

