Hacker News: unwind's comments

OSH Park (probably the most well-known such service) costs $10/square inch for a 4-layer board, for three copies of a board.

As a ballpark figure, this board should be no larger than 1/5 square inch, so that's $2, for three boards.

Agreed on low volumes of the other components, although something like Seeed Studio's Open Parts Library (http://www.seeedstudio.com/depot/OPLopen-parts-library-catal...) is pretty nice for the very basics.

Update: Didn't check if this board meets OSH Park's limitations.


In general you can't really do BGA with OSH Park without fudging their specs significantly. Some have been able to do it, but you tend to end up with peeling traces. Also, don't forget shipping on all the bits and pieces.


I (scare-)quoted the price in the title now, to indicate that it's not very certain.

The FPGA in question seems to cost less than $2 here in Sweden in unit quantities (from Digi-Key: http://www.digikey.se/product-detail/en/ICE40UL1K-SWG16ITR50...), so at least that component by itself doesn't have to blow up the parts cost.


Mods, please add the missing trailing 'l' to the title. Thanks.


I disagree.

All you need to care about for cases like these, when you're talking about the size of something, is that both malloc() and new[] handle allocation size using size_t.

That, to me, says pretty clearly that "the proper type to express the size, in bytes, of something you're going to store in memory is size_t".

It can't be too small, since that would break the core allocation interfaces, which really doesn't seem likely.

You don't need to know how many bits are in size_t all that often, and certainly not for the quoted code.


For cross-platform interoperability, an API with exact-size types helps remove any ambiguity. Using size_t might be fine for intra-process usage, but as soon as we are dealing with data across platforms, an exact-size type definition is a must.


I see it the other way around. How many bits you need to address something in memory depends on the platform. Thus `size_t` is the only cross-platform type you can use. A fixed-size integral is going to work on some, but not all.


> Using size_t might be fine for intra-process usage, but as soon as we are dealing with data across platforms, exact size type definition is a must.

I don't know why you are downvoted, but this is very important.

Never send anything "on the wire" (or to a file) unless you know its exact size and endianness.


That is correct. For file formats, packets, etc. you must use exact sizes.

However, for cross-platform support, using size_t in an API (as in what is exposed via a .dll or .so) is a must. It's exactly the correct way to write cross-platform code.


A big part of the data I work with needs to be serialized and cross-platform; having explicitly sized types is the only way to keep sanity.


Sounds like you are mixing up your data's in-memory representation with its storage/transmission representation. This is risky business.

If you have no requirement that says otherwise, you should have explicit marshalling and demarshalling steps that transform your live data objects into opaque BLObs. It would be highly desirable if your BLObs had a header containing metadata used exclusively for marshalling purposes; at the very least, the payload size, an object type id and a format version id will save you lots of trouble.

Now, what happens if you need high performance and are willing to trade off code complexity for faster execution? You can just copy your native object's bytes into the BLOB payload, as long as you correctly identify the source platform's relevant characteristics in the header. Then, when the target host does the demarshalling step, it can decide whether the native format is compatible with its own platform and just copy the payload into a zeroed buffer of the correct size. If that is not the case, it will have to perform an extra deferred marshalling step to put the payload in "canonical" format prior to demarshalling proper.

You can even make the behavior configurable, so that customers running a heterogeneous environment do not suffer a performance hit for the sake of the customers in homogeneous environments.


Of course the data in storage or over the wire needs to be marshalled and unmarshalled (whether explicitly standardizing on a particular wire format or with header based hacks or whatnot). That's not the point.

The point is that a lot of the time, the two machines on either end of the wire need to agree on the sizes of the various fields you're sending (say, in protocol headers). And then you want to work with that data internally in the code on either side. You'd better be absolutely sure how many bits you have in each type that you're allocating for these purposes.

And beyond that, there's a very common use case: a lot of code reads cleaner and lends itself to debuggability when you know the exact sizes of the types you're using. It's not something reserved for network programming.


Sorry, I fail to see the point in your second paragraph. Of course, at the business-logic level you need to allocate variables that can hold every possible value in the valid range, but as long as this is the case, why does it matter whether you use types that have the same byte size on every possible platform?

In your third paragraph, I agree on the debuggability front (if you are actually reading memory dumps; otherwise, why should it matter). As for the code reading cleaner, I guess that is more a matter of taste.


It matters because of code readability, debuggability and all sorts of code hygiene reasons. If I'm using size_t for a field in my protocol on a 32 bit platform on one end and 64 bit platform on the other, which size wins over the wire? Can that question be answered while in debugging flow trying to track down a memory stomping error?


This seems related: http://www.gnu.org/prep/standards/standards.html#index-contr....

That's a part of the GNU Coding Standards which say:

Please use formfeed characters (control-L) to divide the program into pages at logical places (but not within a function).

I always found that particularly archaic.

And yes, of course I realize that vertical tab and form feed are distinct characters.


^L is supported in most pagers and news clients as a page break that pauses scrolling, so this makes sense.


Obviously it's helpful for printing things.

Less obviously, emacs has commands that navigate by logical pages (C-x [, C-x ]). And of course you can adjust the regex that denotes logical pages.


And that flat fee is $5?!

That sounds really, really cheap. I'm so jealous right now.


And they recently announced they bought out the store next door! They'll be knocking the wall down and expanding, I can't wait.



There have been some minor additions, he said: the J2 adds four new instructions. One for atomic operations, one to work around the barrel shifter, "which did not work the way the compiler wanted it to [...]

Is so intriguing! Does anyone know what was wrong with the original barrel shifter design? I tried reading up on it but failed to find much reference material. I followed the link to the J-core community site to read the code, but it wasn't immediately browsable, just available for download.

I assume there were compilers for SuperH back in the day, didn't they use the shifter? Why not fix the compiler to teach it the existing instruction, rather than adding an instruction just for this? How wrong can a shifter be, really? The questions just heap up.


Compilers did use the shifter. I don't know if this is exactly what he was referring to, but one oddity of the SH4's dynamic shift instruction is that it only shifts to the left (there is also a limited set of shift-by-small-constant instructions, for amounts of 1, 2, 8 and 16). To shift to the right, you have to first negate the shift amount, then perform a left shift. So if you did a right shift by a non-constant, you would always see a negation of the shift amount before the shift. My guess as to why it was implemented like this is that since the SH4 had a fixed-length, 2-byte instruction set, running out of possible instructions for future expansion was a real hazard, and not encoding both directions was done to save space.

On the original SH4 implementation, under certain conditions, there had to be one cycle in between when a shift amount was generated and when it was used, otherwise there would be a one-cycle CPU stall. A real right shift would avoid the need to schedule around this stall. This isn't necessarily something that needs an extra instruction to fix; the implementation could be designed to not need the stall, but that might be difficult to do. I don't do circuit design, but dynamic shift instructions typically look at as few bits of the shift amount as possible, to simplify and speed up the design of the shifter. The reason for the delay in the original SH4 is probably that it analyzes and tags each register with information about the correct shift direction and amount, and certain units won't have this information ready for the shifter in time, hence the stall if the shift is too close to the generation of the shift amount. (I've read that certain CPU implementations have done similar work in tagging whether a register is zero or not, in order to help keep branch-on-zero/not-zero instructions quick.) If the instruction talked about is a dedicated right shift, it could be defined in a way that doesn't need the negation and extra tagging; it would be much more compiler-friendly, and faster.



Does that mean that the shifter is actually capable of doing rotates? Otherwise the negation part doesn't make any sense.

If you have 0xf0 and want to shift it three bits to the right to get 0x1e, no amount of negated-amount left-shifting is going to do that unless the instruction is a rotate.

If, on the other hand, you can do an 8-bit rotate left by 8 - 3 = 5 bits, that would produce the same result and would need that "negation" (which is actually an inversion).


Industry Standard Architecture was the standard bus interface in PCs, before PCI et cetera.

So the term has certainly been used a great deal with that meaning, although it's of course a typo in this context where it should mean Instruction Set Architecture.

See https://en.wikipedia.org/wiki/Industry_Standard_Architecture.


Right. I forgot the Not Micro Channel Bus. ;)

Next I'll be forgetting what PCMCIA stands for, I guess.


> what PCMCIA stands for

I will remember this to the day I die: People Can't Memorize Computer Industry Acronyms. ;)


For the very same reason of course? If you're on horseback near traffic, which isn't all that uncommon in rural parts, it'd be a good idea to make sure the horse is visible.

There's a road sign for this too: https://en.wikipedia.org/?title=Warning_sign#/media/File:Swe....


Couldn't resist making a GIF of that horse glowing silver:



Actually, it invokes undefined behavior when it passes a float to printf() (promoted to double by the variadic call) and then tries to print it using %d (which of course expects an int). Don't do that.


