
Sorry for the delay, I'm probably not the best person to answer this and I know just enough on the subject to be dangerous so with that in mind I'll give it a shot.

>If you have bare-metal operations, do you still have access to kernel functionality a la stdio and libc? Normally when you hit bare metal you're on your own.

That's just it: your process is just another Linux process. The difference is that the scheduler will put it on, let's say, core 2, while everything else has an affinity for core 1 and interrupts are also handled by core 1, meaning your application is never interrupted on core 2. You still get every feature you normally get in Linux.

>Also, if you can call these functions like you're in userland, do they block until execution has completed on the other 'kernel' cores?

You are in userland, normal userland. Implementation details of syscalls are black magic as far as I'm concerned, so take this with a grain of salt, but apart from kthreads the kernel isn't sitting in some other thread waiting for a syscall to service. A syscall is just your program executing int 0x80 (or, on modern x86-64, the syscall instruction), which traps into the kernel-mode handler on the same core that was just running your code; the kernel figures out which syscall you're making, does its work, and returns. So basically yes, your thread "blocks" while the syscall is in progress, and it does so on your special isolated core, not on core 1 like everything else running on the system.

>Also, if you're creating P-Threads elsewhere but not managing their execution on the 'non-kernel' core, what happens?

I'm not entirely sure what you mean by this, specifically "managing their execution on the 'non-kernel' core". It's just a thread like a normal Linux thread, but a new thread initially inherits an affinity for only core 1, which you can then change to core 2.

>Can you give any literature on this? These terms are contradictory.

What I meant was that generally, if you want to do certain low-level things like talk directly to hardware, you need to be running in kernel mode. But you don't need to be in kernel mode all of the time, just initially, to set things up so that normal user-mode code can talk to the hardware instead of having to use the kernel like one big expensive proxy. As for why a user-mode driver for a network card is such a huge performance gain, there are a number of reasons: every syscall is a context switch; whatever data you're sending to or receiving from the network card gets needlessly copied to/from kernel buffers instead of being read and written directly; you have to go through the entire Linux TCP/IP stack, which is full of functionality you might not need but have to pay for in wasted cycles; and the list goes on.

I did manage to find an old Hacker News comment on the subject for further reading, from someone much better versed in the topic than I am: https://news.ycombinator.com/item?id=5703632

Also of interest might be Intel's DPDK, which is basically what we're talking about: moving the data plane out of the kernel completely for extreme scalability.
