Does anyone have pointers on where to start with actual embedded programming? I have a couple of Arduinos and RPis lying around, but I'm wondering if there are more 'real' ways to do it.
I have had a lot of fun following Ben Eater's projects, which aren't always embedded-specific (sometimes they're TTL, sometimes Arduino) but are excellent for understanding concepts deeply.
I tend to learn best with a specific project that can grow or morph as my interest or experience dictates. You might find something to build with an Arduino using the stock toolchain/IDE/libraries, get it working, and then start stripping out libraries in favor of your own implementations, or set up a toolchain of your own to cross-compile and flash.
How could one get in C the safety guarantees that Rust provides?
C is a language that doesn't come with many guarantees. I personally like to think of C as a 'higher-level assembler' targeting a virtual machine. I've been led to believe that this figurative description of the language was more common in the past than it is today. I find it a helpful description since it offers an explanation for many of C's design choices, such as its weak typing and its use of pointers. If I'm correct, it's also an accurate description of the language's original aims in systems development.
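To make the 'thin veneer over memory' point concrete, here's a minimal sketch (standard C, assuming the usual 32-bit float and unsigned int) showing the same four bytes viewed as a float and as raw bits:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* C's types are largely a view on raw memory: the same
           four bytes read back as a float or as an integer. */
        float f = 1.0f;
        unsigned int bits;
        memcpy(&bits, &f, sizeof bits);   /* reinterpret, without UB */
        printf("1.0f is stored as 0x%08x\n", bits);
        return 0;
    }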
Also, Rust isn't the only systems programming language with a focus on safety. Ada has been around for some time now and is a much more mature language and arguably more suited for the job. It has a demonstrable track record of successful use in safety-critical software. Rust is definitely more 'C-like' than Ada, which might make it preferable to many.
I always thought that after so many years, there must be a testing framework, development tools, and a methodology to give a C developer the safety their problem requires. What do people use when they are programming critical systems, e.g. defense, health, flight control, etc.? Problems like Heartbleed et al. are not something the industry can ignore.
That is why I wondered about the advanced tools I've heard about, e.g. ATS, CompCert, and so on. As I understand it, the model used in Rust comes with limitations in regard to program design.
I actually have built the clock module from Ben Eater with the intent of building the 6502 computer project at some point in the future. I really like his stuff.
* Michael Pont's Embedded C and Patterns for Time-Triggered Embedded Systems (PTTES). They are chock-full of invaluable C code (for the 8051); in particular, beg, borrow or steal PTTES (free PDF available). Also check out his other books and his company SafeTTy Systems.
* Make: AVR Programming by Elliot Williams teaches you to directly program the ATmega328P on an Arduino Uno.
* Introduction to Embedded Systems: Using Microcontrollers and the MSP430 by Jimenez, Palomera et al. is an excellent textbook explaining each hardware aspect of an embedded system and how to program them.
Note: All the above are for bare-metal embedded programming. For Embedded Linux on the RPi, I suggest Exploring Raspberry Pi: Interfacing to the Real World with Embedded Linux by Derek Molloy.
For this part, it's also fun to have a logic analyzer (starting at about 10 bucks) to see changes in the code manifest on physical pins. It's also helpful for verifying that what you think you are doing is actually happening. E.g., the SPI chip-select pin may be inverted (high when it should be low, and vice versa).
Then start off with a simple program that does init and periodically reads the sensor. Perhaps add thresholds that trigger, e.g., an LED. Then you can extend this to pipe the readings over a serial port to the RPi and push them to some server of your choice (e.g. via MQTT), or display them on a local webserver dashboard.
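A minimal sketch of that shape, with hypothetical hw_init/sensor_read/led_set/delay_ms helpers standing in for whatever HAL or Arduino library you're using:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical helpers; replace with your HAL or Arduino calls. */
    void    hw_init(void);
    int32_t sensor_read(void);     /* e.g. temperature in 0.01 C steps */
    void    led_set(bool on);
    void    delay_ms(uint32_t ms);

    #define THRESHOLD 2500         /* 25.00 C */

    int main(void) {
        hw_init();
        for (;;) {
            int32_t value = sensor_read();
            led_set(value > THRESHOLD);  /* LED doubles as an alarm */
            delay_ms(1000);              /* poll once per second    */
        }
    }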
Go with sensors that speak ordinary SPI or I2C, not some one-wire protocol. Suggestions: BMP180 (temperature, pressure), TSL2561 (light).
edit: if you are doing it on the Arduino, you can start off with the Arduino SPI/I2C libs, and later on, if you wish, fire up the AVR datasheet (or that of whatever MCU is on your Arduino) and implement I2C/SPI yourself by changing registers on the CPU.
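For a taste of that register-level work, here's a sketch of SPI master setup and a byte transfer on the ATmega328P (the Uno's MCU); register and bit names are straight from the AVR datasheet:

    #include <avr/io.h>
    #include <stdint.h>

    /* SPI master at F_CPU/16. PB2 = SS, PB3 = MOSI, PB5 = SCK
       must be outputs (standard Uno pin mapping). */
    static void spi_init(void) {
        DDRB |= (1 << PB2) | (1 << PB3) | (1 << PB5);
        SPCR  = (1 << SPE) | (1 << MSTR) | (1 << SPR0);
    }

    static uint8_t spi_transfer(uint8_t out) {
        SPDR = out;                     /* start the transmission   */
        while (!(SPSR & (1 << SPIF)))   /* busy-wait for completion */
            ;
        return SPDR;                    /* byte clocked in from the slave */
    }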
I'd recommend getting a dev kit like the STM32F4DISCOVERY (https://www.st.com/en/evaluation-tools/stm32f4discovery.html). ST Micro's boards are often used for courses (https://www.udemy.com/course/cortex-m/) so you may like to take some of those courses. You'll often hear about the TI MSP430 as another microcontroller but AFAIK it's beginning to be a bit dated. Although come to think of it, there's probably more educational material out there for it, if you're willing to search.
Grab a kit like the Sparkfun Beginner's Kit (https://www.sparkfun.com/products/13973) and read some of the tutorials on their website about creating circuits. Tutorials or courses for your dev kit should get you to a point where you can light an LED controlled by the micro.
From there, you may like to do more advanced stuff like communicating with sensors over specific protocols (Sparkfun's Tinker Kit and associated guides may be of use https://www.sparkfun.com/products/14556 though you will have to translate from Arduino to C code, which can be good practice for knowing how Arduino works under-the-hood).
At this point, you'll probably know whether you want to keep learning more about sensors/lights/IoT type stuff, or want to branch out to other embedded-related topics. More advanced IoT material will be things like taking sensor measurements, storing measurements to memory, interfacing with displays, sending data via WiFi or Bluetooth.
Edit: I skimmed over a lot to keep it short. There's a lot hiding behind how casually these recommendations are made, so feel free to reach out with any questions (email in profile).
I'm a Python/Julia developer starting to learn C. I have K&R already, and Test Driven Development for Embedded C (Grenning).
I did order 'Modern C' by Gustedt but the publisher never delivered to Waterstones so they had to cancel the order (about 6 months ago, book still unavailable from Waterstones as of today).
* How to identify and handle undefined behavior in a C program
* The range and representations of integers and floating-point values
* How integer promotions are performed and how they may affect portability
I think it's incredibly important to understand how numbers work on computers: what the limits of 32-bit and 64-bit values are, and how floats/doubles play into it.
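The promotions are where most of the surprises hide; a small self-contained example of two of them:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint8_t a = 200, b = 100;
        /* a and b are promoted to int, so the addition yields 300;
           storing it back into a uint8_t truncates to 300 % 256. */
        uint8_t sum = a + b;
        printf("%u\n", (unsigned)sum);   /* prints 44 */

        /* Mixed signed/unsigned comparisons convert the signed side:
           -1 becomes a huge unsigned value here. */
        if (-1 > 1U)
            puts("-1 > 1U is (surprisingly) true");
        return 0;
    }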
* The order of evaluation of expressions
Most coding styles avoid ambiguity by just using (enough parentheses) around (important statements).
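Worth noting that parentheses pin down grouping, not evaluation order; the operand order below is unspecified, which a quick sketch can demonstrate:

    #include <stdio.h>

    static int trace(const char *name, int v) {
        printf("%s ", name);
        return v;
    }

    int main(void) {
        /* Parentheses fix precedence, but the compiler may still
           evaluate the two calls below in either order. */
        int r = trace("left", 1) + trace("right", 2);
        printf("= %d\n", r);  /* "left right = 3" or "right left = 3" */
        return 0;
    }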
* Dynamic memory allocation including the use of non-standard functions
Non-standard worries me here. Memory allocation isn't particularly
* The philosophy underlying the use of character types in C
Character encodings and types
I'm curious as to what this is covering. Is it system-specific?
* How to perform input/output (I/O) with terminals and file systems using C Standard Streams and POSIX file descriptors
* The translation phases implemented by the C compiler and the role of the preprocessor
* How to test, debug, and analyze C programs
I/O is an interesting topic, but I suspect it's best covered by a systems programming manual. There are several Unix books, Stevens historically being the go-to guide, and I would go straight to that source instead.
>"You'll learn how to develop correct, portable, professional-quality code and build a foundation for developing security-critical and safety-critical systems."
So the stated aim of the book is to build a "foundation" from which you could then go on to digest and effectively use advanced things like the Stevens book.
Safety-critical and security-critical are interesting topics for an intro book. I'd like people to be aware and conscious of them. Anyway, we could debate this one; it really depends on how paranoid a beginner programmer becomes.
I learned from K&R, but highly recommend Seacord's books if you're looking for how to write secure C and a more modern take on some of the trickier parts of C.
K&R is a good reference, but not good for learning the language IMO.
1: I love Plum's books (starting with Learning to Program in C), but can't recommend them since the language has changed so much in the 37 years since they came out.
Looking at the current language/job markets outside the center, I feel like we are hitting the same problems as in open source: people add C++ to every C job posting to have something with the same level of innovation as the newer languages, even when it's about embedded Linux and you wouldn't let a C++ construct near the system.
I guess all these AVR, STM32 and ESP8266 devices running C++ are to be discarded too?
But after initial turbulence, life has gotten much simpler (and dare I say quiet) for our HR since we moved to using the stock job profiles shipped with the platforms we buy.
We'll also be pushing out a promotional offer on this book, likely this coming week so you may want to wait for that. Just trying to keep things on track now that our company is completely remote and physical book stores are closing left and right.
I ask because there are 3 or 4 titles on your "coming soon" page that I'm very interested in, but I have limited funds for a purchase. Cheers.
I'm chatting with Robert about whether there's a chapter that he's comfortable releasing now. If not we should have something Monday or Tuesday and the entire book within a week or so. I need to confirm on Monday.
The 30% off coupon listed on the books page is good on the print purchase, which also includes all the ebooks.
Okay, I realize "profusion" might seem a bit overblown, but honestly, in C world (and in comparison to other languages), this is practically a publishing boom.
Not that I'm complaining; C was my first programming language (back some time in the mid-90s), and it's still my favorite. But I wonder why we're suddenly getting new books on it? The language itself hasn't undergone any substantial changes recently, and if anything, "memory safety" is all the rage -- a thing that C most assuredly is not.
If this can get the non-C programmers to learn and understand the usefulness of a simple, minimal and direct language ("modern" languages are just too bloated), it is well worth it.
I have said it before and I'll say it again: C will allow you to program everything from itty-bitty MCUs to honking server machines. It is also the de facto universal "glue" language, and its real-world benefits far outweigh any perceived difficulties.
Bottom line: every programmer should know C, whether you use/like it or not.
It's got interactivity (bundled compiler/interpreter in one system), a large number of implementations (possibly more supported CPUs than C++, but that might be more due to its age), and a Forth system is easier to "pick apart" and learn how its compiler's implemented than a C compiler.
Given the proliferation of processors everywhere, it is of immeasurable value to a programmer to become familiar with one "universal" language/runtime/toolchain so that he can program literally anything. It simplifies "incidental complexity" (avoids "tower of babel") which is key to getting things done.
E: Also, C can be run anywhere. ANYWHERE. It can run on home computers and consoles from the '80s! I like that fact.
21st Century C is from 2014(?) and Understanding and Using C Pointers is from 2013, so it is not exactly a sudden profusion.
Had that happened a while ago, we would never have seen PHP et al.
There is absolutely no reason why a website should be built with C, sorry...
Edit: link to rasmus lerdorf on 25 years php: https://m.youtube.com/watch?v=wCZ5TJCBWMg
Please tell that to Felix von Leitner (fefe.de, blog.fefe.de), and post his reply in this forum.
His blog software is written in C. It's one of the most frequented blogs in German-speaking countries and runs on a single machine (AFAIK).
Of course, not everybody has the expertise to build something like a blog in C.
This is, however, not a good example. You know that, right?
I don't think that it is a question of expertise, more like choosing the right tool for the right problem.
ZetZ -- Symbolic Verifier and Transpiler to C. https://github.com/aep/zz and previous discussions, https://news.ycombinator.com/item?id=22245409
Zig, https://ziglang.org/ https://hn.algolia.com/?q=ziglang
Rust is more of a C++ competitor, https://www.rust-lang.org/
And Dlang has a mode where it can be used without a GC. https://dlang.org/spec/garbage.html
I left out any language which forces the use of a garbage collector.
Tried twice, couldn't solve it.
If you're trying to sell something, make it easy to get my money.
(And it doesn't matter whether it's a physical book or an e-book; the cost of printing nowadays is ca. $1 per 100 pages.)
When our authors ask how long their book should be I always say: Long enough to cover the subject, short enough to keep it interesting. My company is called No Starch Press for a reason. Think of the word starch as a nicer way to say "BS", as in No BS Press.
There's a lot of work behind these pages, as with all of our books. Unlike any other publisher in this field, we have several people who read and craft every line of every book as necessary, together with each author, before a book goes off to a copy editor. That's where most books start, but not ours.
The real cost in creating a good book is not in the paper. It's in the time it takes to actually craft the words.
I know the Rust book has already been translated into some languages. I even started the Esperanto translation myself, but had other priorities in the meantime. Doing that in spare time doesn't help accelerate the process, but I did complete the translation of the Lua reference manual this way.
If you are interested, just send me an email through mathieu at culture-libre dot org, or reply to this message with some instruction on which channel you would prefer to use.
Resume-wise, I've been writing C code since 1985, and I've been an expert on the C Standards Committee (WG14) since 2004. I've written two prior books on C Programming including "Secure Coding in C and C++" and "The CERT C Coding Standard". I also teach Secure Coding in C for NCC Group https://www.nccgroup.trust/us/our-services/cyber-security/se... and I also taught these topics to Computer Science undergraduates and graduate students at CMU for over a decade. So I think I have a good balance of technical skills and communications skills, but you know, pick up a copy and judge for yourself.
I'll buy an extra copy if you can get 'defer' in there too!
Those two features would really be the bee's knees :)
Re: developing a paper for constexpr, that's an interesting idea. I hadn't really thought about writing one up myself but it'd be an interesting thing to try out. Can you recommend a good example proposal that I could read? Shoot me an email at nicholas dot clark on the gmail if you ever see this.
But it's also not exactly -complete-. The standards committee makes changes every few years, including language additions.
I'm a fulltime C programmer, and I would _love_ to have both of the features I suggested - for the following reasons.
- defer: I could defer a free() after every malloc(), guaranteeing that I won't miss one or forget about it. Lots of memory leaks and file-descriptor leaks could be easily avoided if C provided 'defer' as a language feature. GCC already offers something kind of similar with its __cleanup__ attribute, and a lot of programs rely on it (see the first sketch after this list). How much better would it be for the language to support it natively?
- constexpr: I am _so tired_ of writing really complex macros. Like, say I want to populate a lookup table for a sine wave (for fast use in a realtime system). Wouldn't it be nice if I could just populate my lookup table with a constexpr function? Then I wouldn't need really nasty macros, and I'd also be able to ensure that the calculations all happen at compile time (the second sketch below shows today's workaround).
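For anyone who hasn't seen it, here's roughly what the GCC/Clang cleanup attribute mentioned above looks like; a sketch, not a full defer replacement:

    #include <stdio.h>
    #include <stdlib.h>

    /* Runs automatically when the annotated variable leaves scope. */
    static void free_ptr(void *p) {
        free(*(void **)p);
    }
    #define AUTO_FREE __attribute__((cleanup(free_ptr)))

    int main(void) {
        AUTO_FREE char *buf = malloc(64);
        if (!buf)
            return 1;
        snprintf(buf, 64, "no leak");
        puts(buf);
        return 0;   /* buf is freed here, no explicit free() needed */
    }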
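And for the lookup-table case, the usual fallback without constexpr is to burn startup time in an init function (or generate a C file at build time); a sketch:

    #include <math.h>
    #include <stdint.h>

    #define TABLE_SIZE 256
    static int16_t sine_table[TABLE_SIZE];

    /* What a constexpr function could compute at compile time has
       to happen at startup instead (or in a build-time generator). */
    static void sine_table_init(void) {
        const double two_pi = 6.283185307179586;
        for (int i = 0; i < TABLE_SIZE; i++)
            sine_table[i] =
                (int16_t)(32767.0 * sin(two_pi * i / TABLE_SIZE));
    }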
Now that C allows mixing variable declarations in with statements, it becomes annoying that variable declarations cannot have labels. This hits particularly hard with switch/case, but can also apply with ordinary named labels. The syntax to work around this defect is ugly.
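For readers who haven't hit this: a label must precede a statement, and a declaration isn't one (before C23), hence the lone-semicolon workaround:

    int classify(int x) {
        switch (x) {
        case 1: ;                 /* the empty statement is needed:  */
            int doubled = x * 2;  /* a label can't precede a         */
            return doubled;       /* declaration before C23          */
        default:
            return 0;
        }
    }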
The gcc extension for case ranges is really valuable.
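For reference, the extension looks like this (gcc/clang only, not ISO C):

    int char_kind(int c) {
        switch (c) {
        case '0' ... '9': return 1;   /* digit  */
        case 'a' ... 'z':
        case 'A' ... 'Z': return 2;   /* letter */
        default:          return 0;
        }
    }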
Setting the sign bit via a conversion (for example, going from uint32_t to int32_t when the value doesn't fit) is currently implementation-defined; it should just work in the obvious two's-complement way. Avoiding the problem requires extremely strange code.
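The 'extremely strange code' is typically a memcpy, which compilers do at least optimize down to a plain move; a sketch:

    #include <stdint.h>
    #include <string.h>

    /* Portable bit-for-bit reinterpretation of uint32_t as int32_t,
       sidestepping the implementation-defined conversion. */
    static int32_t as_int32(uint32_t u) {
        int32_t s;
        memcpy(&s, &u, sizeof s);   /* optimizes to a no-op */
        return s;
    }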
I'd like a way to prevent arrays from being replaced by pointers. Assignment could work. Sometimes I really want to pass an array to a function, and I don't mind if that means a megabyte is copied into the function args on the stack. Sometimes I really want to force the huge copy; other times I'd rather have the compiler keep it in the caller's frame (but the callee can mangle it unless it is const) and just pretend that the callee got more than a pointer. Array dimensions need to survive. The callee's prototype should be able to demand specific dimensions or receive them as variables, and the caller should be able to pass a portion of a larger array.
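The closest thing available today is the struct-wrapper trick, which forces the full copy but can't express the flexible dimensions asked for above; a sketch:

    #include <stdint.h>

    /* Wrapping the array in a struct gives true pass-by-value:
       the whole ~1 MiB buffer is copied into the call. */
    struct block { uint8_t data[1u << 20]; };

    void process(struct block b);   /* b is a private copy */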
The default function parameters of C++ would be useful. The UNIX API for open() would be best done this way, allowing a prototype without the need for stdarg. There doesn't seem to be any reason why default parameters would have to be at the end; a pair of adjacent commas in the middle is a fine way to indicate that the default is to be used for that missing parameter.
It's time to standardize bitfield layout so that bitfields can be used in portable code for purposes like assemblers and disassemblers. (in other words, not for purposes like access to MMIO registers) Microsoft and GNU compilers are already quite compatible on x86_64, so that would be the basis of standardization.
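As a concrete case, a decoder like the following is exactly what today's implementation-defined layout blocks; the field widths here follow the RISC-V R-type encoding:

    #include <stdint.h>

    /* Bit order and packing are implementation-defined, so this
       can't portably decode instruction words today. */
    struct riscv_r_type {
        uint32_t opcode : 7;
        uint32_t rd     : 5;
        uint32_t funct3 : 3;
        uint32_t rs1    : 5;
        uint32_t rs2    : 5;
        uint32_t funct7 : 7;
    };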
When a bitfield happens to have the size and alignment of a normal integer type, it should be possible to take the address. The resulting type would be a pointer to the integer type of lowest rank having the correct size.
Anonymous unions and structs would be valuable in all scopes, including at file level. This would allow careful data organization to save space, improve cache locality, prevent undesired cache line aliasing, or allow the intentional aliasing of types. Current technology typically involves abuse of the linker, which is well outside the C language.
Being able to do something like an #include, but with a blob of binary data, would be helpful for initializing big arrays. Current practice is to have build scripts convert binary files to C, to have the linker do it, or to rely on assemblers with the capability. None of that is nice to use.
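For comparison, typical current practice: a build step such as `xxd -i logo.png` emits a C file along these lines, which then gets compiled in (bytes elided here):

    unsigned char logo_png[] = {
        0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a, /* ... */
    };
    unsigned int logo_png_len = sizeof logo_png;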
So that we don't have to invoke m4 or do nasty things with recursive macros, the preprocessor could support loops.
There are a few gcc extensions that make macros far more reasonable, including statement expressions and typeof. Add those.
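The classic example of what those two enable together is a MAX macro that evaluates each argument exactly once (GNU C, not ISO):

    #define MAX(a, b) ({        \
        typeof(a) _a = (a);     \
        typeof(b) _b = (b);     \
        _a > _b ? _a : _b;      \
    })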
This is what has annoyed me about pulp paperback fiction. They skyrocketed from $5 to $15 in 15 years. Then you can buy the eBook for $12 and feel like you're getting a "deal" on a product with near zero production costs.
Amazon may ultimately discount this from the list price as they usually do but they are currently deprioritizing books due to the pandemic.
When Brian W. Kernighan releases a book, it's an instant buy for me.
So until a preview chapter or similar is available, I'll throw out this link to Modern C, 2nd Ed. by Gustedt. It has been well received, and I thought Jens's writing style was solid.
I don’t know if it is the perfect second book to read on C, but it seems well paced and things are well explained.
Here is a newer article on Uninitialized Reads published by ACM
Jens and I are both on the C Standards committee. I would definitely say that our books serve different markets.
If you think there is going to be a large difference in applicability and focus between your 'Effective C' and Jens's 'Modern C', I am kind of excited: not just to get your book, but also because, even though some people would like to see C relegated to the trash bin, there is still some life left in a language I enjoy using.
NoStarch never let me down so far, and paying USD 60 (or 40, or whatever) is such a marginal difference for a book you're going to spends dozens of hours on.
Is that a feature? I'm constantly frustrated by books that use many words to say little. Wasting hours on such books tends to be negative value, and I shouldn't buy them even for two cents.
Many books nowadays, also from NoStarch, are released as early access books. Your book does not run under the early access program, or at least not yet.
Is this a decision made by you as an author? Or is the kind of book not really suited for early access? I could imagine that introductory books work well as early access books; readers could work through the first couple of chapters, while later chapters are still being written or refined.
I probably won't be able to read it before July, anyway, because I have to finish my bachelor's degree first. But I think it's not a bad time for a financial contribution to my favorite tech book publisher, so I already bought the book "blindly".
The book is supposed to be around 270 pages, so verbosity probably won't be an issue.
I am right now working through "Python Crash Course" as an intermediate Python programmer. I very quickly read through chapters 1-8 so far and did all the exercises (rather simple ones), just to make sure that I'm not missing anything on the basics. In the last couple of chapters, I actually picked up two or three pieces of knowledge I wasn't aware of. I've spent around five to ten hours on this so far. This approach sounds terribly inefficient.
However, working through a beginner's book as an intermediate Python programmer, and only getting half a dozen pieces of really new information out of it, gives me the confidence I need at this stage. So I don't consider it a waste, but rather an exercise in patience and repetition.
Why am I telling you all this? Because I think people overrate the monetary costs of books.
In this case, I'm definitely curious about the book and would like to skim through it just to satisfy that curiosity, but I have my doubts as to whether I'll get much out of it (I've been writing C for 15 years and I do it professionally). Maybe there are gems of wisdom (or things I've overlooked) in it that would make it worthwhile, but it's possible I'd just regret the time and money spent :-(
maybe older books are cheaper
It's a luxury item. For comparison K&R2 sells for $51.99 new on amazon.
I think a fair price would be $49.99.