Hacker News | dosshell's comments

> I have to have TikTok for work

I'm sorry but what? Your job dictates which apps you have installed on your PRIVATE phone!?


Well, nobody's forced it, but my company publishes content on TikTok that drives customers, and I want to be able to see it myself. You'd be surprised how many CISOs and security workers are on TikTok.

Edit: "experts" > "workers"


Tiktok.com

?


I would assume for advertising/business account. There are things you can only do on the TikTok app that you can't do on the web.

All jobs I've had since the mid-2010s essentially did the same for me by requiring 2FA in certain contexts.

What kind of 2FA? I run OTP on my work laptop. Yes, it's maybe not really a second factor if someone has access to my laptop with LUKS open. But at least I don't expect any automated attack, because it's my own piece of code using an OTP library.
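
(For illustration only, not the parent's actual setup: generating TOTP codes locally with an off-the-shelf library can be as small as the Python sketch below. pyotp is just one library choice, and the secret is a placeholder.)

    # Minimal sketch: local TOTP generation with the pyotp library.
    # The base32 secret below is a placeholder, not a real credential.
    import pyotp

    totp = pyotp.TOTP("JBSWY3DPEHPK3PXP")
    code = totp.now()                # current 6-digit code
    print(code, totp.verify(code))   # verifies within the current time window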

One of the contexts is logging in to the laptop, which would be pretty challenging to facilitate on-device ;)

Sadly, biometric authentication as 2FA is not sufficient for that.


Same here. If someone is accessing my OTP codes from my laptop, I've got bigger problems to worry about.

Only my most recent job is doing this. Before that, the job provided a phone for 2FA, which I didn't use much outside of that.

>> because part of what’s holding Zig back from doing async right is limitations and flaws in LLVM

This was interesting! Do you have a link or something where I can read about it?


Much of the discussion is buried in the various GitHub issues related to async. I found something of a summary in this Reddit comment:

https://www.reddit.com/r/Zig/comments/1d66gtp/comment/l6umbt...


IIRC the LLVM async operation does heap allocations?


Note that there is no Nobel Prize in economic sciences.

There is only a similarly named prize, awarded in the name and memory of Alfred Nobel, which somehow is allowed to be part of the Nobel Prize celebration.

I guess my opinion is in the minority, but I don't like that another prize hijacks the Nobel Prize.


> I can get away with a smaller sized float

When talking about not assuming optimizations...

A 32-bit float is slower than a 64-bit float on reasonably modern x86-64.

The reason is that the 32-bit float is emulated using 64-bit.

Of course, if you have several floats you need to optimize for the cache.


Um... no. This is 100% completely and totally wrong.

x86-64 requires the hardware to support SSE2, which has native single-precision and double-precision instructions for floating-point (e.g., scalar multiply is MULSS and MULSD, respectively). Both the single precision and the double precision instructions will take the same time, except for DIVSS/DIVSD, where the 32-bit float version is slightly faster (about 2 cycles latency faster, and reciprocal throughput of 3 versus 5 per Agner's tables).

You might be thinking of x87 floating-point units, where all arithmetic is done internally using 80-bit floating-point types. But all x86 chips in like the last 20 years have had SSE units--which are faster anyways. Even in the days when it was the major floating-point unit, it wasn't any slower, since all floating-point operations took the same time independent of format. It might be slower if you insisted that code compilation strictly follow IEEE 754 rules, but the solution everybody adopted was to not do that, and that's why things like Java's strictfp or C's FLT_EVAL_METHOD were born. Even in that case, however, 32-bit floats would likely be faster than 64-bit for the simple fact that 32-bit floats can safely be emulated in 80-bit without fear of double rounding but 64-bit floats cannot.


I agree with you. Thinking about it more, it should take the same time. I remember learning this around 2016, and I did a performance test on Skylake which confirmed it (Windows, VS2015). I think I remember that I only tested with addsd/addss. Definitely not x87. But as always, if the result cannot be reproduced... I stand corrected until then.


I tried to reproduce it on Ivy Bridge (Windows, VS2012) and failed (mulss and mulsd) [0]. Single and double precision take the same time. I also found a behavior where the first batch of iterations takes more time regardless of precision. It is possible that this tricked me last time.

[0] https://gist.github.com/dosshell/495680f0f768ae84a106eb054f2...

Sorry for the confusion and spreading false information.


Sure, I clarified this in a sibling comment, but I kind of meant that I will use the slower "money" or "decimal" types by default. Usually those are more accurate and less error-prone, and then if it actually matters I might go back to a floating point or integer-based solution.


I think this is only true if using x87 floating point, which anything computationally intensive generally avoids these days in favor of SSE/AVX floats. In the latter case, for a given vector width, the CPU can process twice as many 32-bit floats as 64-bit floats per clock cycle.


Yes, as I wrote, it is only true for a single float value.

SIMD/MIMD will benefit from working on a smaller width. This is not only because they do more work per clock, but because memory is slow. Super slow compared to the CPU. Optimization is largely about avoiding cache misses.

(But remember that a cache line is 64 bytes, so reading a single value smaller than that will take the same time. So in theory it does not matter when comparing one f32 against one f64.)
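
(As a rough illustration of the bandwidth point, not the benchmark discussed upthread: a NumPy sketch like the one below typically shows float32 arrays processed roughly twice as fast as float64 once the arrays are much larger than the cache. Exact numbers depend on the machine.)

    # Rough sketch: elementwise multiply on arrays far larger than the cache.
    # float32 moves half the bytes of float64, so it usually runs about 2x faster here.
    import time
    import numpy as np

    n = 20_000_000
    for dtype in (np.float32, np.float64):
        a = np.ones(n, dtype=dtype)
        b = np.ones(n, dtype=dtype)
        start = time.perf_counter()
        c = a * b
        print(np.dtype(dtype).name, round(time.perf_counter() - start, 3), "s")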


This is very interesting! Are there any movements towards this?

Wouldn't it open up a new attack vector where processes could read each other's data?


I agree with you, hidden is worse.

But we do know what it cannot statically link to: any GPL library, which many indirect dependencies are.


I think you mean the LGPL? It allows you to "convey a combined work under terms of your choice" as long as the LGPL-covered part can be modified, which can be achieved either via dynamic linking or by providing the proprietary code as bare object files to relink statically. The GPL doesn't have this exception.


If static and dynamic libraries use the same interface, shouldn't they be detectable in both cases? Or is it removed at compile time?


First IANACC (I'm not a compiler programmer), but this is my understanding:

What do you mean by interface?

A dynamic library is handled very differently from a static one. A dynamic library is loaded into the process's virtual memory address space, and there is a trace there of the tree of loaded libraries. (I would guess this program walks that tree, but there may be better ways, which I do not know of, that this program utilizes.)

In the GNU/Linux world, a static library is more or less a collection of object files. The linker, to the best of my knowledge, will not treat the contents of static libraries any differently from your own code. LTO can take place. In the final ELF, the static library will be indistinguishable from your own code.

My experience with the symbol table in ELF files is limited, and I do not know whether it could help unwrap static library dependencies. (A debug symbol table would of course help.)
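
(For the dynamic case, the dependencies are recorded as DT_NEEDED entries in the ELF dynamic section. A sketch of listing them, assuming the pyelftools package rather than whatever this program actually uses:)

    # Sketch: print the DT_NEEDED entries (shared-library dependencies) of an ELF file.
    # Assumes the pyelftools package; statically linked code leaves no such record.
    import sys
    from elftools.elf.elffile import ELFFile

    with open(sys.argv[1], "rb") as f:
        elf = ELFFile(f)
        dyn = elf.get_section_by_name(".dynamic")
        if dyn is None:
            print("no dynamic section (fully static binary)")
        else:
            for tag in dyn.iter_tags():
                if tag.entry.d_tag == "DT_NEEDED":
                    print("needs:", tag.needed)

For a fully static binary there is simply nothing like this to walk, which is why detecting statically linked library content is so much harder.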


I know this is maybe not the answer you want, but if you are just interested in getting the job done there exist companies that are experts on this, for example:

https://fortune.com/2024/03/11/adaptive-startup-funding-falc...


Also interested in this. Does this task really require such specialized knowledge?


The first thing that is required is to define what they are trying to do. In other words, list some question and answer examples. It's amazing how many people are unwilling or unable to do this and just jump to "we need to train a custom model". To do what exactly, or answer what kinds of questions? I have actually had multiple clients refuse to do that.
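
(A toy illustration of what is meant by question and answer examples; the content is entirely invented. Even a handful of these, written down before any talk of training, pins down what the system is actually supposed to do.)

    # Toy illustration: a few invented question/answer examples that define the task.
    examples = [
        {"question": "What is the refund window for annual plans?",
         "answer": "30 days from purchase, refunded to the original payment method."},
        {"question": "Which file formats can be imported?",
         "answer": "CSV and XLSX; PDF import is not supported."},
    ]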


Very good point. I totally agree with you.


One problem I encounter with math wiki is that I almost need to know what it is before reading to understand the wiki page.

I think Wikibooks is a good initiative to solve this, and it could be powerful when combined with a normal wiki.


> almost need to know what it is before reading to understand the wiki page

There is a project page advocating more accessible technical articles, https://en.wikipedia.org/wiki/Wikipedia:Make_technical_artic...

In some cases technical subjects just require some pretty steep prerequisite knowledge, but where possible it's nice to try to make them as accessible as can be done practically within the space constraint of a few introductory paragraphs. Usually that means trying to aim at least part of any article at approximately "1 level below" the level where students are expected to first encounter the topic in their formal study. (This isn't always accomplished, and feel free to complain on specific pages that fall far short.)

Writing for an extremely diverse audience with diverse needs is a hard problem. And more generally, writing well as a pseudonymous volunteer collective is really hard, and a lot of the volunteers just aren't very good writers. Then some topics are politicized, ...

How much time have you personally spent trying to make technical articles whose subjects you do know about more accessible to newcomers? If anyone reading this discussion has the chance, please try to chip away at this problem, even if it's just contributing to articles about e.g. high school or early undergraduate level topics – many of these are not accessible at the appropriate level. But if you are an expert about some tricky technical topic in e.g. computing or biology or mechanical engineering, go get involved in fixing it up.


I’m sure it is totally impossible because of figuring out where to start (what’s “obvious” to the reader), but a wiki that also has some sort of graph and could work out the dependencies for a given theorem, what you need to know to understand it, and then a couple of applications (as examples) could be really useful. An automatic custom textbook on one specific topic.
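
(A toy sketch of that idea, with topic names and prerequisites entirely made up: a hand-curated prerequisite graph plus a topological sort already gives a reading order for one topic. Python's standard graphlib is enough.)

    # Toy sketch: hand-curated prerequisite graph -> reading order for one topic.
    # Topics and edges are invented purely for illustration.
    from graphlib import TopologicalSorter

    prereqs = {
        "monad": ["functor", "natural transformation"],
        "natural transformation": ["functor"],
        "functor": ["category"],
        "category": [],
    }

    def reading_order(topic):
        needed, stack = set(), [topic]
        while stack:                      # collect transitive prerequisites
            t = stack.pop()
            if t not in needed:
                needed.add(t)
                stack.extend(prereqs.get(t, []))
        sub = {t: [p for p in prereqs.get(t, []) if p in needed] for t in needed}
        return list(TopologicalSorter(sub).static_order())

    print(reading_order("monad"))
    # prints: ['category', 'functor', 'natural transformation', 'monad']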


Look up Abstract Wikipedia.

https://meta.wikimedia.org/wiki/Abstract_Wikipedia

It's more or less Wikipedia but the articles are created using natural language generation on a functional programming base. The main goal is to generate content in any language from a common underlying structure, but one could also try recursive explanations of a given topic in that framework as well.


> One problem I encounter with math wiki is that I almost need to know what it is before reading to understand the wiki page.

Case in point, nLab: https://ncatlab.org/nlab/show/HomePage

For instance, https://ncatlab.org/nlab/show/homotopy+type+theory

Although this is partly inevitable because the content is really abstract, I know there are more approachable ways to define “monad” than https://ncatlab.org/nlab/show/monad


Yes, Wikipedia is really bad for maths articles. They're all written by people who just learnt about the topic and are showing off their pedantically detailed knowledge of it.

I recommend MathWorld. Much, much better.


A benefit of knowing a non-English language is that my native-language wiki entry is usually a good TL;DR of the English one.

Many of the English math entries seem to be written for math students (as in math-program students, not students studying math).


I think you have a typo in your URL.


Thanks for the heads up! There was an SSL issue on Cloudflare; it should be fixed now :)


I'm not able to tell pixellabs AI art apart from professionally painted art.

Of course, the best results come when the tool is combined with a good artist.

I'm 100% sure the price for game art will drop significantly, if it hasn't already.

