It spins up a localhost server after the trace ends; the profiler UI loads everything from that local server, and nothing is shared with Firefox's servers unless you explicitly choose to upload the data and create a permalink.
I strongly recommend not using this. Instead, use pprof: it has a MUCH better interactive flamegraph, plus other nice performance visualisations (e.g. a call graph).
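For anyone who hasn't wired it up before, a minimal sketch of the workflow, assuming a Go service (ports and the 30s duration are arbitrary examples):

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // side-effect import: registers /debug/pprof/* on the default mux
)

func main() {
	// Serve the profiling endpoints; while the app is running, something like:
	//   go tool pprof -http=:8081 http://localhost:6060/debug/pprof/profile?seconds=30
	// captures a 30s CPU profile and opens pprof's interactive flamegraph,
	// call graph, and source views in the browser.
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```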
I've been using DataGrip for a few weeks and admit it is a nice upgrade over DBVisualizer, which I'd been using for 10 years. The intellisense and features like being able to select the query in the current window are big time savers for me. I'm still on a trial and not certain I'll purchase it, just because things are moving so fast in this field. I feel like not having it in my VSCode Agent loop is a huge negative at this point.
DataGrip, as an extension, gives you SQL support (highlighting, autocompletion, and more) inside non-SQL files, such as your programming-language source files. I think they call this 'language injection'.
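Roughly what it looks like, if I'm remembering the injection-comment syntax right (Go file here, table/column names made up):

```go
package main

import "fmt"

func main() {
	// The JetBrains-style injection comment below tells the IDE to treat the
	// string literal as SQL (highlighting, completion, inspections); the Go
	// compiler just sees an ordinary string. Exact syntax may vary by IDE version.
	//language=SQL
	query := `SELECT id, name FROM users WHERE created_at > $1`
	fmt.Println(query)
}
```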
Performance optimization, and being able to "plug in" to the data ecosystem that has grown up around Apache Arrow.
OpenTelemetry is pretty great for a lot of uses, but the protocol over the wire is too chunky for some applications. From last year's post on the topic [0]:
> In a side-by-side comparison between OpenTelemetry Protocol (“OTLP”) and OpenTelemetry Protocol with Apache Arrow for similarly configured traces pipelines, we observe 30% improvement in compression. Although this study specifically focused on traces data, we have observed results for logs and metrics signals in production settings too, where OTel-Arrow users can expect 50% to 70% improvement relative to OTLP for similar pipeline configurations.
For your average set of apps and services running in a k8s cluster somewhere in the cloud, this is just a nice-to-have, but size on the wire is a problem for a lot of systems out there today, and they are precluded from adopting OpenTelemetry until that's solved.
Not sure, but it seems like it will produce Apache Arrow data and carry it across the data stack end to end from OTel. This would be great for creating data without a bunch of duplicated/redundant processing steps and exporting it in a form that's ready to query.
Unless I don't understand that fully (which could be the case).
This idea could fly if downstream readers will be able to read it. JSON is great because anything can read it, process it, transform it, and serialize it without having to know the intricacies of the protocol.
What's the point of using a binary, columnar format for data in transit?
Arrow is an in-memory columnar format, kinda orthogonal to Parquet (which is an at-rest format). Protobuf is a better comparison, but it's more message-oriented and not suited for analytics.
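To make "in-memory columnar" concrete, here's a tiny sketch with the Arrow Go library (the v14 module path is just an example; use whatever version you're on). Each column is one contiguous typed buffer, not a stream of per-record messages, which is what makes vectorized scans cheap:

```go
package main

import (
	"fmt"

	"github.com/apache/arrow/go/v14/arrow/array"
	"github.com/apache/arrow/go/v14/arrow/memory"
)

func main() {
	pool := memory.NewGoAllocator()

	// Build one Arrow column: all values land in a single contiguous,
	// typed buffer, ready for analytics engines to scan directly.
	b := array.NewInt64Builder(pool)
	defer b.Release()
	b.AppendValues([]int64{3, 1, 4, 1, 5}, nil)

	col := b.NewInt64Array()
	defer col.Release()
	fmt.Println(col.Int64Values()) // [3 1 4 1 5]
}
```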
Everyone in startups should be a fan of this. More competition in the government space is a net benefit for everyone.
It's quite funny to see the comments complaining about "lowering the bar" when FedRAMP is essentially a compliance regime so convoluted that most startups wouldn't be able to afford the entry barrier.
Now there is a chance that a smaller vendor could feasibly compete with a massive consultancy like Accenture, since the artificial barriers have been lowered.
FedRAMP compliance is also required for SaaS vendors. Datadog is famous for having it (and it took them a while).
I've tried hard to remove phthalates from my life. The biggest change that feels sustainable is looking for "hard" plastics: phthalates are usually found in flexible, soft plastics (they're added as plasticizers), so hard plastics typically contain fewer of them.
This is supposed to be used in place of CUDA, HIP, Metal, Vulkan, OpenGL, etc. It targets the hardware directly, so it doesn't need to be supported by those APIs as such.
The site also seems to clearly state it's a work in progress. It's just an interesting blog post...
They also miss that in CUDA's case it is an ecosystem.
Actually it is C, C++, Fortran, OpenACC, and OpenMP, plus PTX support for Java, Haskell, Julia, and C#, alongside the libraries, IDE tooling, and GPU graphical debugging.
Likewise Metal is plain C++14 plus extensions.
On the graphics side, HLSL dominates, followed by GLSL and now Slang. Then there are MSL, PSSL, and whatever NVN uses.
By the way, at GTC NVIDIA announced going all in on Python JIT compilers for CUDA, with feature parity with the existing C++ tooling. There is now also a new IR for array programming, Tile IR.
By this logic, most software engineers would also be considered high-risk, since they work sedentary jobs and have a higher risk of heart disease and obesity (which likely leads to higher healthcare costs over the long term).
Wow, the smugness of that reply. Responding by calling someone naive and blowing them off, despite there being real questions.
The “insecure crypto” that they clearly link to (despite not wanting to put them on blast) was also a bit overdone.
I guess we're all stuck hiring this expert to review our crypto code (under NDA, of course) and tell us we really should use AWS KMS.
There are some weird, non-obvious attacks against KMS that I think are possible. For example, KMS has a mode where it will decrypt without your supplying a key reference (suspicious!). If an attacker can control the ciphertext, they can share a KMS key from their AWS account to yours and then control the plaintext. I haven't confirmed this works, so maybe my understanding is incorrect.
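If that attack is real, the obvious mitigation is to always pin the key on decrypt instead of letting KMS infer it from the ciphertext blob. A sketch with the AWS SDK for Go v2 (the key ARN is a placeholder):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/kms"
)

func decrypt(ctx context.Context, client *kms.Client, ciphertext []byte) ([]byte, error) {
	out, err := client.Decrypt(ctx, &kms.DecryptInput{
		CiphertextBlob: ciphertext,
		// Pinning KeyId means a ciphertext produced under an attacker's
		// (cross-account shared) key fails instead of silently decrypting.
		KeyId: aws.String("arn:aws:kms:us-east-1:111122223333:key/example-key-id"), // placeholder
	})
	if err != nil {
		return nil, err
	}
	return out.Plaintext, nil
}

func main() {
	cfg, err := config.LoadDefaultConfig(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	client := kms.NewFromConfig(cfg)
	_ = client // decrypt(ctx, client, ciphertext) would be called with real data here
}
```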
Also, with KMS you should probably be using the data key API, but then you need some kind of authenticated encryption implemented locally. I think AWS has SDKs for this, but if your use case isn't covered by the SDK then you are back to rolling your own crypto.
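For the data-key path, the usual pattern as I understand it is GenerateDataKey plus local AES-GCM, which is roughly what the AWS Encryption SDK packages up for you. Another hedged sketch, same Go v2 SDK, placeholder key ARN:

```go
package main

import (
	"context"
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/kms"
	"github.com/aws/aws-sdk-go-v2/service/kms/types"
)

// encrypt asks KMS for a fresh data key, encrypts locally with AES-GCM
// (the "authenticated encryption" part), and returns the ciphertext plus
// the KMS-wrapped key to store alongside it.
func encrypt(ctx context.Context, client *kms.Client, plaintext []byte) (ct, wrappedKey []byte, err error) {
	dk, err := client.GenerateDataKey(ctx, &kms.GenerateDataKeyInput{
		KeyId:   aws.String("arn:aws:kms:us-east-1:111122223333:key/example-key-id"), // placeholder
		KeySpec: types.DataKeySpecAes256,
	})
	if err != nil {
		return nil, nil, err
	}
	block, err := aes.NewCipher(dk.Plaintext)
	if err != nil {
		return nil, nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, nil, err
	}
	// Prepend the nonce; GCM authenticates as well as encrypts.
	ct = gcm.Seal(nonce, nonce, plaintext, nil)
	// Later, kms.Decrypt unwraps dk.CiphertextBlob to recover the data key.
	return ct, dk.CiphertextBlob, nil
}

func main() {} // sketch only: wire up a kms.Client via config.LoadDefaultConfig and call encrypt
```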